Patent Abstract:
A system and method for providing on-line, real-time, transparent data migration from an existing storage device to a replacement storage device. The existing and replacement storage devices are connected as a composite storage device that is coupled to a host, network or other data processing system. The replacement storage device includes a table which identifies data elements that have migrated to the replacement storage device. When a host system makes a data transfer request for one or more data elements, the replacement storage device determines whether the data elements have been migrated. If the data elements have migrated, the replacement storage device responds to the data transfer request independently of any interaction with the existing storage device. If the data elements have not migrated, the replacement storage device migrates the requested data elements and then responds to the data request and updates the data element map or table. When not busy servicing other requests, the replacement storage device operates in a background mode to migrate data elements so the data migration can occur concurrently with and transparently to system operations.
Publication number: US20010001870A1
Application number: US09/735,023
Filing date: 2000-12-12
Publication date: 2001-05-24
Inventors: Yuval Ofek; Moshe Yanai
Applicants: Yuval Ofek; Moshe Yanai
Primary IPC: G06F3-0607
Patent Description:
[0001] This is a continuation-in-part of co-pending application for U.S. patent Ser. No. 08/522,903, filed Sep. 1, 1995, for a System and Method for On-Line, Real Time, Data Migration.
BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention
[0003] This invention relates to data storage systems and more particularly, to a system and method for on-line replacement of an existing data storage subsystem.
[0004] 2. Description of Related Art
[0005] Data processing centers of businesses and organizations such as banks, airlines and insurance companies, for example, rely almost exclusively on their ability to access and process large amounts of data stored on a data storage device. Data and other information, typically stored on one or more data storage devices which form part of a larger data storage system, is commonly referred to as a database.
[0006] Databases are nearly always “open” and constantly “in use”, being accessed by a coupled data processing system, central processing unit (CPU) or host mainframe computer. The inability to access data is disastrous if not a crisis for such businesses and organizations and will typically result in the business or organization being forced to temporarily cease operation.
[0007] During the course of normal operations, these businesses and organizations must upgrade their data storage devices and data storage systems. Although such upgrading sometimes includes only the addition of data storage capacity to their existing physical systems, more often than not upgrading requires the addition of a completely separate and new data storage system. In such cases, the existing data on the existing data storage system or device must be backed up on a separate device such as a tape drive, the new system installed and connected to the data processing unit, and the data copied from the back-up device to the new data storage system. Such activity typically takes at least two days to accomplish. If the conversion takes more than two days or if the business or organization cannot withstand two days of inoperability, the need and desire to upgrade their data storage system may pose an insurmountable problem.
[0008] Some prior art data copying methods and systems have proposed allowing two data storage systems of the same type, a first system and a second system, to be coupled to one another, and allowing the data storage systems themselves to control data copying from the first to the second system without intervention from or interference with the host data processing system. See for example, the data storage system described in U.S. patent application No. 08/052,039 entitled REMOTE DATA MIRRORING, fully incorporated herein by reference, which describes one such remote data copying facility feature which can be implemented on a Symmetrix 5500 data storage system available from EMC Corporation, Hopkinton, Mass.
[0009] Although such a system and method for data copying is possible, in most instances, the first and second data storage systems are not of the same type, or of a type which allows such a “background” data migration to take place between the two data storage systems, unassisted by the host and while the database is open. Additionally, even on such prior art data storage systems, migrating data as a “background” task while the database is “open” does not take into account the fact that the data is constantly changing as it is accessed by the host or central processing unit and accordingly, if the old system is left connected to the host, there will always be a disparity between the data which is stored on the old data storage system and the data which has been migrated onto the new data storage system. In such cases, the new data storage system may never fully “catch up” and be able to be completely synchronized to the old data storage system.
[0010] Accordingly, what is needed is a system and method for allowing data migration between a first data storage system and a second data storage system while the database is open and in real-time, completely transparent to the host or data processing unit.
SUMMARY
[0011] This invention features a system and method for providing on-line, real-time, transparent data migration between two data storage devices. The system includes a first data storage device which was previously coupled to an external source of data including a data processing device such as a host computer, or a network which may be connected to a number of data processing devices such as a number of host computers. The data processing device such as a host computer reads data from and writes data to the data storage device. The first data storage device initially includes a plurality of data elements currently being accessed by the data processing device.
[0012] At least one second data storage device is provided which is coupled to the first data storage device and to the data processing device, for storing data elements to be accessed by the data processing device. The second data storage device preferably includes a data element map including at least an indication of whether or not a particular data element is stored on the second data storage system.
[0013] In one embodiment, the second data storage system migrates data from the first to the second data storage system independently of the external source. In another embodiment, the second data storage system is responsive to the external source, for migrating data from the first to the second data storage system.
[0014] In yet another embodiment, the data processing device issues a data read request (in the case of a read data operation), or a data write command (in the case of a write operation). The request is received by the second data storage device. In the case of a read operation, the second data storage device examines the data map or table to determine whether or not the data has been migrated to and is stored on the second data storage device. If it is determined that the data is stored on the second data storage device, the data is made available to the requesting device.
[0015] If the data is not stored on the second data storage device, the second data storage device issues a data request, in the form of a read data command, to the first data storage device, obtains the data and makes the data available to the requesting device. The data received from the first data storage device is also written to the second data storage device and the data map updated.
[0016] In the case of a write operation, one embodiment contemplates that if the data received from the data processing device is destined for a location on the data storage system that has not yet been copied or ‘migrated’ from the older or first data storage device (a data storage location marked in the data map as ‘need to migrate’), and the data is not a full or complete data element (for example, not a ‘full track’ of data), the write operation is suspended, the “complete” data element from the corresponding location (a ‘full track’, for example) on the first data storage device is read into the cache memory on the second data storage device, the in-cache flag or bit set, the data storage location marked or identified as ‘write pending’, and the write operation resumed, meaning that the data will be ‘written’ to and over the ‘full track’ of data now stored in the cache memory of the second data storage system. In other embodiments, the older data may not be retrieved from the first or older data storage device if the new data to be written is known to be a complete data element (a ‘full track’, for example).
[0017] When the second data storage device is not busy handling data read or write requests from a coupled data processing device, such as a host computer, the second data storage system examines its data map/table to determine which data elements are resident on the first data storage device and are not stored on the second data storage device. The second data storage device then issues read requests to the first data storage device requesting one or more of those data elements, receives the data, writes the data to the second data storage device and updates the data map/table to indicate that the data is now stored on the second data storage device.
[0018] In this manner, there is no need to perform time consuming off-line data migration between first and second data storage devices but rather, the data copying or migration can occur in real-time, while the data storage devices are on-line and available to the host or other requesting device, and completely transparent to the coupled data processing device.
[0019] In the preferred embodiment, the second data storage device further includes or is coupled to a data storage system configuration device, such as a computer, which provides configuration data to the data element map or table on the second data storage device, allowing the second data storage device to be at least partially configured in a manner which is generally similar or identical to the first data storage device.
[0020] Additionally, the preferred embodiment contemplates that the second and first data storage devices are coupled by a high speed communication link, such as a fiber optic link employing the “ESCON” communication protocol. The preferred embodiment also contemplates that the second data storage device includes a plurality of data storage devices, such as disk drives. In this case, data elements may include one or more of a disk drive volume, track or record.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The appended claims particularly point out and distinctly claim the subject matter of this invention. The various objects, advantages and novel features of this invention will be more fully apparent from a reading of the following detailed description in conjunction with the accompanying drawings in which like reference numerals refer to like parts, and in which:
[0022] FIG. 1 is a schematic diagram of an exemplary data processing and data storage system on which the system and method for providing on-line, transparent data migration between first and second data storage systems in accordance with the present invention may be accomplished;
[0023] FIG. 2 is a schematic illustration of a data element map or table;
[0024] FIG. 3 is a flowchart outlining the steps of providing on-line, transparent data migration between first and second data storage systems according to the method of the present invention;
[0025] FIG. 4 is a flowchart illustrating the steps for providing data migration between first and second data storage systems without data storage device or host system intervention when the second data storage device is not busy handling data requests from the host or data processing device;
[0026] FIG. 5 is a schematic diagram of another embodiment of a data processing and data storage system on which the system and method for providing on-line, transparent data migration between first and second data storage systems in accordance with the present invention may be accomplished;
[0027] FIG. 6 is a flowchart illustrating the steps for connecting the second data storage system without interrupting the operation of the data processing system;
[0028] FIG. 7 is a detailed flowchart illustrating the steps of a procedure of FIG. 6;
[0029] FIG. 8 is a schematic diagram of another embodiment of a data processing and data storage system incorporating this invention;
[0030] FIG. 9 is a flowchart illustrating the steps for shadowing the operation of the circuit in FIG. 8;
[0031] FIG. 10 depicts a set of registers that are useful in accordance with another aspect of this invention;
[0032] FIG. 11 is a flow chart of the steps for a copy block program operating in accordance with this other aspect of this invention;
[0033] FIG. 12 is a flow chart of a program that controls an operating mode for the copy block program of FIG. 11; and
[0034] FIG. 13 graphically depicts the advantages of the implementation of FIGS. 10 through 12.
DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
[0035] The present invention features a system and method for providing on-line, real-time, transparent data migration between two data storage systems, at least one of which is coupled to a data processing device such as a host computer.
[0036] An exemplary system 10, FIG. 1, on which the present invention may be performed and implemented includes a host computer, central processing unit or other similar data processing device 12. The data processing device 12 is initially coupled to a first data storage system 14. In most instances, the first data storage system 14 is an older data storage system which is either not large enough to handle the needs of the data processing device 12, or for some other reason is going to be completely or partially replaced or augmented by the addition of a second data storage system 16.
[0037] The first data storage system 14 is initially coupled to the data processing device 12 by means of a data communication link 19. The second data storage system 16 is coupled to the first data storage system 14 by means of one or more data communication paths 20a and/or 20b. Examples of data communication paths 20a-20b include an IBM “bus and tag” connection well known to those skilled in the art, and higher speed fiber optic connections such as an ESCON data connection.
[0038] If the first and second data storage systems 14, 16 have an incompatible data communication protocol or interface, a protocol converter 22 may be provided on one or more of the data communication links 20a, 20b as required, and as is well known in the art.
[0039] The second data storage system 16 includes a data map or table 24 of data elements which are stored on at least the second data storage system 16. The data map or table is established during the set-up or configuration of the second data storage system 16 and is dependent on the particular configuration of the second data storage system 16.
[0040] Preferably, the data map/table 24 also includes information about data elements which are stored in the first data storage system 14; the use of such a data map/table will be explained in greater detail below.
[0041] The second data storage system 16 is typically and preferably coupled to a data storage system configuration device 26 such as a computer, which allows the user to configure the second data storage system 16 and the data map/table 24 as desired by the user. In the preferred embodiment, the second data storage system 16 is at least partially configured exactly as the first data storage system 14 is configured in terms of the number of logical devices, storage size, storage system type (3380/3390, for example) etc.
[0042] In the preferred embodiment, the data storage system configuration device 26 allows the user to configure at least a portion of the data storage area on second data storage system 16 to include data element storage locations or addresses which correspond to data element storage addresses on the first data storage system 14.
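By way of illustration only, the following Python sketch shows how such a configuration step might mirror the donor layout so that target addresses correspond to donor addresses. The class and function names are assumptions introduced for this sketch and are not part of this specification.

```python
# Illustrative sketch only: build a target configuration whose logical devices,
# volumes and track counts mirror the donor's, so that data element addresses
# on the target correspond to addresses on the donor. Names such as
# DeviceLayout and mirror_configuration are assumptions, not patent terms.
from dataclasses import dataclass
from typing import List

@dataclass
class VolumeLayout:
    volume_id: int
    tracks: int

@dataclass
class DeviceLayout:
    device_id: str
    device_type: str                     # e.g. "3380" or "3390"
    volumes: List[VolumeLayout]

def mirror_configuration(donor_devices: List[DeviceLayout]) -> List[DeviceLayout]:
    """Return a target configuration whose addressing matches the donor's."""
    return [DeviceLayout(d.device_id, d.device_type,
                         [VolumeLayout(v.volume_id, v.tracks) for v in d.volumes])
            for d in donor_devices]

# Example: one 3390 device with two volumes of 15 tracks each.
donor = [DeviceLayout("X", "3390", [VolumeLayout(1, 15), VolumeLayout(2, 15)])]
print(mirror_configuration(donor))
```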
[0043] In the preferred embodiment, the second data storage system 16 is a disk drive data storage system employing a large number of fixed block architecture (FBA) formatted disk drives 17a-17n, and adapted for storing large amounts of data to be accessed by a host computer or other data processing device 12. The exemplary second data storage system 16 also typically includes a cache memory 18 which serves to hold or buffer data read and write requests between the second data storage system 16 and the host or other data processing device 12. Such data storage systems are well known to those skilled in the art and include, for example, the Symmetrix 5500 series data storage system available from EMC Corporation, Hopkinton, Mass., a description of which is incorporated herein by reference.
[0044] Initially, the second or new data storage system 16 is first coupled to the first data storage system 14 by means of one or more data communication links or paths 20a, 20b. After the second data storage system 16 has been configured using a system configuration device 26 or other similar or equivalent device, or by the host 12, the second data storage system 16 is coupled to the host computer 12 or other data processing device by means of a data communication path 28.
[0045] Preferably, data communication path 28 is a high speed communication path such as a fiber optic “ESCON” communication path, although any and all other communication paths are considered to be within the scope of the present invention. Immediately before connecting data communication path 28 between the host or other data processing unit 12 and the second data storage system 16, the previously existing data communication path 19 between the host 12 and the first data storage system 14 is disconnected or severed as illustrated at arrow 30.
[0046] Thus, in contrast with the prior art whereby the host or other data processing system 12 must be taken off line for a number of days in order to allow for backing up of data on the first data storage system 14 followed by the replacement of the first data storage system 14 with a second data storage system 16 and subsequent copying of all of the data onto the new data storage system 16, or a host which remains coupled to the original ‘first’ data storage system 14, the present invention only requires the host computer or other data processing device 12 to be off line or service interrupted for a relatively short period of time (the procedure typically takes approximately 10 minutes or less), while the first data signal path 19 is severed or disconnected and the second data signal path 28 is established between the second or new data storage system 16 and the host computer or other data processing device 12.
[0047] Accordingly, after the second data storage system 16 has been connected to the host or other data processing unit 12, whenever the host or data processing unit 12 issues a request to read data from or write data to “its” data storage system, the request is received by the second data storage system 16. Using a bit or flag from the data map/table 24 previously established and configured, the second data storage system 16, by scanning data map/table 24, determines whether or not the data requested (in the case of a read operation) is stored on the first data storage system 14 or on the second data storage system 16.
[0048] Such a hierarchical data map/table 24 is further explained and exemplified herein as well as in U.S. Pat. Nos. 5,206,939 and 5,381,539 assigned to the assignee of the present invention and both fully incorporated herein by reference.
[0049] If the data is already stored in the second data storage system 16, the second data storage system 16 retrieves the data (perhaps temporarily storing the data in cache memory 18) as is well known in the art, and makes the data available to the host or other requesting data processing device 12.
[0050] If the requested data is not on the second data storage system 16, channel or real-time data handling process 25 of the second data storage system 16 issues a read data request to the first data storage system 14 in the manner and format native or known to the first data storage system 14 (for example, standard IBM data read commands). Channel or real-time data handling process 25 is, in the preferred embodiment, a software program comprising a series of commands or instructions which receives one or more commands from the second data storage system interface to the host or CPU (typically called a “channel”), interprets those commands, and issues one or more corresponding commands which can be acted upon by the first data storage system. Such an ‘interpreter’ type of software is well known to those skilled in the art.
[0051] The first data storage system 14 then retrieves the requested data and provides it to the second data storage system 16. The second data storage system 16 then makes the data available to the host or other data processing unit 12 which has requested the data.
[0052] Since the second data storage system now has a copy of the data, the data will be written to the second data storage system 16 and the appropriate data map/table 24 flags or bits updated to indicate that the data has been migrated to the second data storage system 16, so that the next time the same data element is requested, the second data storage system 16 will have the data already stored on the system and will not have to request it from the first data storage system.
[0053] Further, as will be explained in greater detail below, the second data storage system 16 can perform a “background” data migration procedure or process 27. The “background” data migration procedure or process 27 is, in the preferred embodiment, a software program including a series of instructions which coordinate, monitor and control data migration whereby, whenever the second data storage system is not busy handling data input/output requests from the host or other data processing device 12, the migrate process 27 of the second data storage system 16 determines which data on the first data storage system has not been copied by reading a specified flag or bit in its data map/table 24, and copies or “migrates” the data from the first data storage system 14 to the second data storage system 16 completely transparently to the host 12, and often in parallel with the channel process 25 which may be retrieving data from the first data storage system 14 in response to requests from the host or CPU 12, while maintaining full accessibility to the data by the host or other data processing device 12.
[0054] An exemplary data element map/table 24 is shown in greater detail in FIG. 2. In the preferred embodiment, the data map/table 24 is organized in a hierarchical fashion. For example, for the preferred embodiment wherein the data storage system includes a plurality of longer term data storage devices such as disk drives 17a-17n, and wherein each disk drive is partitioned into one or more logical “volumes” and each volume comprises a number of disk drive tracks, the data map/table 24 will first have an entry 50 for each physical and/or logical device such as a disk drive.
[0055] The device entry 50 will be followed by an entry 52 for a first logical volume, followed by one or more entries 54a-54c for each track of the device which comprises the logical volume 52. The entries 52, 54a-54c for the first logical volume will be followed by entry line 56 for the second logical volume configured on the physical device indicated by the entry at line 50.
[0056] All information about the data storage system and each device in the data storage system, with the exception of the “data in cache” indication flag or bit 58, is stored in hierarchical format in the data map/table 24. Thus, whenever the second data storage system 16 desires or needs to obtain information about a particular data element (be it an individual data record, track or volume), the data storage system 16 scans the data map/table 24 beginning at the device level 50 to determine whether or not the desired criterion or characteristic has been established for any track or volume of a device.
[0057] There will be a ‘flag’ or other similar indicator bit set, or other indication of the desired characteristic, in the device entry 50, in the volume entry 52 and in the appropriate track entry 54 if the desired characteristic is found in that portion of the data storage device represented by the data map/table 24.
[0058] For example, the preferred embodiment of a data map/table 24 includes a write pending flag or bit 61 which is set if a particular data element is presently stored in cache 18 of the second data storage system 16 and must be written to longer term storage such as a disk drive 17a-17n. For exemplary purposes, assuming that track 2 of volume 1 is in cache 18 in the second data storage system 16 and write pending, the write pending flag or bit 61 and the in cache bit 58 at line entry 54b (for track two) will be set, as will the write pending bit 61 of volume 1 at line 52 of the data map/table 24, as will the write pending bit 61 of the device at line 50.
[0059] Thus, if the second data storage system 16 wishes to determine whether or not a particular track or record which has been requested is write-pending or has been migrated to the second system, or the status of some other attribute or characteristic, the data storage system 16 first determines which device or disk drive 17a-17n the data element is stored on and then checks the appropriate indicator flag bit for that device. If the particular indicator flag bit is not set for that device, then the second data storage system 16 knows immediately that no lower level storage unit or location such as a volume or track in that device has that attribute. If any lower data storage element in the hierarchical structure such as a track or volume includes the attribute, then the attribute or flag bit for the device will be set.
[0060] Similarly, if a particular data storage location such as a record or track which is part of a logical volume has the requested attribute, then the corresponding attribute or flag bit for the volume will be set. The data storage system 16 can thereby quickly determine whether any data storage location having a lower level than the volume or other similar logical or physical partition being examined has the particular attribute, without scanning or searching each and every lower level data storage location.
[0061] The “in-cache” flag or bit is an exception to the hierarchical structure in that, since each line or entry 50-56 of the data map/table 24 is directly addressable, the second data storage system directly addresses the table entry line for a particular data element when it must inquire or “look-up” whether that particular data element is presently “in-cache”. It is understood, however, that this flag or bit could be managed in a hierarchical fashion without departing from the scope of this invention.
[0062] In addition to the in-cache bit or flag 58 and the write pending flag or bit 61, the data map/table 24 which is one feature of the present invention includes, in the preferred embodiment, other flag bits 62 such as an invalid track format flag or bit, and an indication of whether or not data on a particular device, volume or track needs migration or has been migrated from the first to the second data storage system 14/16 respectively, as shown generally by flag or indicator bit 60.
[0063] Data map/table 24 may further include a physical address 64 entry for each element in the map or table 24, which identifies the beginning data address 64 at which the corresponding data element can be found on the disk drive 17a-17n of the new or second data storage system 16.
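The hierarchical organization just described can be illustrated with a short Python sketch. The structure and names below are assumptions introduced for illustration only and do not reproduce any implementation of this specification; they simply show flag propagation from track to volume to device, with the in-cache flag handled per entry as noted above.

```python
# Minimal sketch of a hierarchical data map/table in the spirit of FIG. 2.
# Flags set on a track are propagated to the enclosing volume and device
# entries, so a clear device-level flag rules out every lower-level entry
# without scanning; the "in_cache" flag is kept per entry and not propagated.
from dataclasses import dataclass, field

FLAGS = ("need_migration", "write_pending", "in_cache", "invalid_track_format")

@dataclass
class Entry:
    flags: dict = field(default_factory=lambda: {f: False for f in FLAGS})
    physical_addr: int = 0          # beginning address on the new system's drives

class DataMap:
    def __init__(self, tracks_per_volume, volumes_per_device, devices=1):
        self.map = {
            d: {"entry": Entry(),
                "volumes": {v: {"entry": Entry(),
                                "tracks": {t: Entry()
                                           for t in range(1, tracks_per_volume + 1)}}
                            for v in range(1, volumes_per_device + 1)}}
            for d in range(devices)}

    def set_flag(self, flag, device, volume, track, value=True):
        node = self.map[device]
        node["volumes"][volume]["tracks"][track].flags[flag] = value
        if flag != "in_cache" and value:
            node["volumes"][volume]["entry"].flags[flag] = True
            node["entry"].flags[flag] = True

    def device_has(self, flag, device):
        # If this returns False, no volume or track on the device has the flag.
        return self.map[device]["entry"].flags[flag]

# Example mirroring the write-pending illustration: track 2 of volume 1 in cache.
m = DataMap(tracks_per_volume=4, volumes_per_device=2)
m.set_flag("in_cache", 0, 1, 2)
m.set_flag("write_pending", 0, 1, 2)
assert m.device_has("write_pending", 0)
```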
[0064] The operation of the method according to the present invention will be described in greater detail beginning with step 100, FIG. 3, wherein the second data storage system 16 receives a data element read or write request from the host or other data processing device 12, step 100. The method next determines if the request or command is a read or a write request, step 101. If the command is a read command, the channel handling process 25 of the second data storage system 16 next determines if the requested data is already stored in the second data storage system 16, step 102, by reading its data map/table 24.
[0065] If the data is stored on the second data storage system, step 102, the second data storage system 16 will make the data available to the host or other requesting data processing device 12, step 104, and return to step 100 to await receipt of a new data read or write request.
[0066] If, however, at step 102, the second data storage system 16 determines that the data is not presently stored on the second data storage system 16, the second data storage system 16 will generate a request to the first data storage system 14 to read the data, step 106.
[0067] The command or request to read data from the first data storage system 14 takes the same form as a read data command which would be issued from the host 12. Thus, for example, if the host 12 is an IBM or IBM compatible host or data processing device, the second data storage system 16 will issue an IBM compatible “read” command to the first data storage system 14. The channel and migrate processes 25, 27 of the second data storage system 16 maintain a list of commands native to the first data storage system 14 and can easily convert command types, if necessary, from a first command type issued by the host 12 and understood by the second data storage system 16, to a second command type understood by the first data storage system 14.
[0068] Subsequently, the second data storage system 16 receives the requested data from the first data storage system 14, step 108, and writes the data to the cache memory 18 of the second data storage system 16 while updating the data element map/table 24, step 110. The second data storage system 16 then provides an indication to the host or data processing device 12 that the data is ready to be read, step 112. Subsequently, the second data storage system 16 will write the data from cache memory 18 to a more permanent storage location, such as a disk drive, on the second data storage system 16, step 114, followed by a final update to one or more bits or flags of the data element map/table 24, step 116.
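A compact Python sketch of this read path is given below, using plain dictionaries as stand-ins for the data map 24, the cache 18, the donor system 14 and the target's drives. All names here are illustrative assumptions rather than the implementation of this specification.

```python
# Hedged sketch of the channel (real-time) process handling a host read,
# loosely following steps 100-116 of FIG. 3.
def handle_read(key, data_map, cache, donor, disks):
    entry = data_map.setdefault(key, {"need_migration": True, "write_pending": False})
    if not entry["need_migration"]:                   # data already migrated
        return cache.get(key, disks.get(key))
    data = donor[key]                                 # steps 106/108: read from donor
    cache[key] = data                                 # step 110: stage in cache
    entry["write_pending"] = True
    entry["need_migration"] = False                   # element is now migrated
    # step 112: the data is made available to the host; steps 114/116 destage
    # the cache copy to a drive and clear the write-pending flag.
    disks[key] = data
    entry["write_pending"] = False
    return data

donor = {("X", 1, 2): b"old track image"}             # donor volume 1, track 2
data_map, cache, disks = {}, {}, {}
print(handle_read(("X", 1, 2), data_map, cache, donor, disks))
print(handle_read(("X", 1, 2), data_map, cache, donor, disks))  # second read is local
```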
[0069] Thus, in the case where requested data is not yet stored on the second data storage system 16, the “read request” command from the host 12 results in the second data storage system 16 “migrating” the data from the first data storage system 14 to the second data storage system 16.
[0070] If the host or other data processing system 12 issues a write request or command, step 120, the channel process 25 of the second data storage system 16 determines if the data to be written has been previously migrated from the first to the second data storage system, step 122. If the data has been previously migrated, step 122, the second data storage system writes the data to cache and updates any necessary flags or bits in the data map/table 24, step 110. Processing continues as previously described.
[0071] If, however, the data has not been previously migrated, step 122, the method of the present invention next determines, by the type of command or request issued by the host (for example in the case of IBM host commands), whether or not the write request is for a full or complete data element storage location, such as a full or complete “track” of data, step 124. If the write request is for a full “track” or other similar type of data block or content, the second data storage system does not need to worry about migrating the data from the first data storage system 14 since all the “old” data is being replaced by the current command and therefore, processing continues to step 110 as previously described.
[0072] If, however, the method determines that the write request is for less than a full or complete data block or confine, such as a track, step 124, the method next temporarily suspends handling of the write request, step 126, issues a “read” command for the full or complete “track” to the first data storage system 14, reads a predetermined amount of data (a whole track of data, for example), step 128, and copies the full “track” of data to the cache memory 18 of the second data storage system 16. The new data to be written is then written into the proper memory location in cache memory 18 (the occurrence of the actual “write” command), the data map/table 24 is updated (for example, to indicate that the data is in cache memory 18 [data in cache bit set], that a write is pending on this data [write pending bit set], and that the data elements have been migrated [data needs migration bits re-set]) and the host or other central processing unit 12 is informed that the write command is complete.
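The partial-track write handling of steps 120 through 128 can be sketched as follows. The dictionary stand-ins, the 16-byte track size and the function names are assumptions chosen only for this illustration.

```python
# Hedged sketch of the write path: a partial-track write to a not-yet-migrated
# location first pulls the full track from the donor into cache, then overlays
# the new bytes and marks the track write-pending and migrated.
TRACK_SIZE = 16

def handle_write(key, offset, new_bytes, data_map, cache, donor, disks,
                 full_track=False):
    entry = data_map.setdefault(key, {"need_migration": True, "write_pending": False})
    if full_track:
        cache[key] = bytearray(TRACK_SIZE)            # old data is fully replaced
    elif entry["need_migration"]:
        # Suspend the write and read the complete track from the donor (step 128)...
        cache[key] = bytearray(donor.get(key, bytes(TRACK_SIZE)))
    elif key not in cache:
        cache[key] = bytearray(disks.get(key, bytes(TRACK_SIZE)))
    # ...then resume: write the new data over the cached track image.
    cache[key][offset:offset + len(new_bytes)] = new_bytes
    entry["write_pending"] = True                     # destaged to a drive later
    entry["need_migration"] = False                   # the element has now migrated

donor = {("X", 1, 3): b"AAAAAAAAAAAAAAAA"}
data_map, cache, disks = {}, {}, {}
handle_write(("X", 1, 3), 4, b"NEW!", data_map, cache, donor, disks)
print(bytes(cache[("X", 1, 3)]))                      # b'AAAANEW!AAAAAAAA'
```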
[0073] At some later time, the data in cache memory 18 which has been flagged as write pending is copied to a more permanent storage location, such as a disk drive, and the write pending bit reset.
[0074] Typically, data write requests are performed to update only a portion of the total or complete number of data elements stored in a predetermined data storage element or physical/logical confine (such as a disk drive track). The present invention, however, also realizes that in some cases, such as when the host or data processing unit 12 provides an indication that both the data structure (format) as well as the actual data contents are to be updated, reading old data from the first data storage system 14 may be eliminated since all data and data format or structure will be updated with the new write request. Such a data and format write command is so infrequent, however, that the preferred embodiment contemplates that each write request will cause the corresponding data to be read from the first data storage system 14.
[0075] The method of the present invention also allows the second or new data storage system 16 to provide transparent or “background” data migration between the first data storage system 14 and the second data storage system 16 irrespective of, or in parallel with, the data transfer or migration caused by the channel process which is serving the “channel” between the host 12 and the second data storage system 16. Since the goal of providing the second or new data storage system 16 is generally to provide enhanced or increased capabilities to the host or other data processing system 12, it is therefore desirable to migrate the data as quickly yet as unobtrusively as possible from the first to the second data storage system.
[0076] Thus, with the background migrate or copy “task” or “process” 27 (a series of software instructions executed by a central processing unit in the second data storage system 16; such hardware and software are well known in the art, see for example the EMC Symmetrix series 5500 data storage systems), the present method first determines whether the second data storage system 16 is completely busy servicing read or write data requests from the host or other connected data processing system 12, step 200, FIG. 4. If the second data storage system 16 is completely busy handling such requests to and from the host or data processing system 12, or completely busy handling other data input/output (I/O) operations in the second data storage system 16, further processing does not take place; instead, the migrate process 27 awaits a “not busy” or “available” indication from the operating system of the second data storage system 16.
[0077] Once the second data storage system 16 is not busy handling internal input/output (I/O) requests or requests from the host or data processing device 12, the second data storage system 16 reads the data map/table 24, step 202, and determines which data elements have not been copied from the first data storage system 14 to the second data storage system 16, step 204.
[0078] As previously mentioned, during initial configuration of the second data storage system 16, before the second data storage system comes “on line”, the user or system engineer will utilize a system configuration device 26, such as a personal computer or other input device, to configure at least a portion of the data storage locations 17a-17n in the second data storage system 16 to exactly emulate (i.e. have the same memory addresses) the data storage system configuration of the first or older data storage system 14. Generally, the new or second data storage system 16 will have a greater storage capacity than the first or “old” data storage system 14 and therefore, additional storage areas or locations will become available. Therefore, if the first data storage system 14 includes a predetermined number of drives or volumes, each drive or volume having a certain number of tracks or records, the second data storage system will be configured to imitate such a configuration.
[0079] Once the second data storage system 16 has determined that at least one data element (such as a track) has not been copied from the old or first data storage system 14, the second data storage system 16 issues a request to the first data storage system 14 for the data element, step 206. Once received, the second data storage system 16 stores the data on the second data storage system 16 (typically in cache memory 18), step 208, updates the second data storage system data map/table 24, step 210, and returns to step 200 to determine whether or not there is a pending data read or write request from the host or other data processing system 12.
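A minimal Python sketch of this background loop follows, assuming dictionary stand-ins for the data map, the donor device and the target's drives; the is_busy callable stands in for whatever "not busy" indication the operating system of the second data storage system provides. None of these names come from the specification itself.

```python
# Hedged sketch of the background migrate process of FIG. 4: while the system
# is idle, scan the data map for elements still marked "need migration", pull
# each from the donor, store it, and update the map.
def background_migrate(data_map, donor, disks, is_busy):
    for key, entry in data_map.items():               # steps 202/204: scan the map
        if is_busy():                                  # yield to host I/O (step 200)
            return
        if entry["need_migration"]:
            disks[key] = donor[key]                    # steps 206/208: copy the element
            entry["need_migration"] = False            # step 210: update the map

data_map = {("X", 1, t): {"need_migration": True} for t in range(1, 5)}
donor = {("X", 1, t): f"track {t}".encode() for t in range(1, 5)}
disks = {}
background_migrate(data_map, donor, disks, is_busy=lambda: False)
print(all(not e["need_migration"] for e in data_map.values()))   # True
```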
[0080] In one embodiment, the present invention contemplates that it may be desirable to “prefetch” data from the first data storage system 14 to the second data storage system 16. For example, the migrate or copy process 27 may, using commands native to the first data storage system 14, issue a prefetch or “sequential” data access request or command to the first data storage system 14, to cause the first data storage system 14 to continue to fetch or ‘prefetch’ a certain number of data elements to the cache memory 18 of the second data storage system 16. Such prefetching can significantly speed up the transfer of data between the first and second data storage systems 14, 16 by greatly reducing the number of “read” commands which must be passed between the data storage systems.
[0081] In another embodiment, the migration process 27 may determine that one or more read requests from the host 12 are part of a sequence of such read requests. In such an instance, the channel process 25 may take the current address of data being requested by the host 12 and increase it by a predetermined number. For example, if the host 12 is currently requesting data from an address ‘411’, the channel process 25 will issue a read request to the first data storage system 14 for the data at address 411. Generally simultaneously, the channel process will pass an indication to the migrate process 27 to begin prefetching or migrating data from address ‘413’. Thus, the migrate process 27 will be used to ensure that the second data storage system 16 gets ‘ahead’ of the channel process 25 and the actual data requests from the first data storage system 14. The channel process 25 will handle requests from the host 12 for data at addresses 411 and 412. Subsequent requests will already be in cache in the second data storage system 16 and quickly handled by the second data storage system 16.
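The sequential-read optimization can be illustrated with the following hedged sketch. The two-address look-ahead and the prefetch window size are assumptions chosen to echo the 411/413 example above; they are not fixed parameters of the invention.

```python
# Illustrative sketch: when consecutive host reads are observed, ask the
# migrate process to start prefetching a few addresses ahead so that later
# requests hit the cache of the second data storage system.
PREFETCH_AHEAD = 2
PREFETCH_COUNT = 4

class SequenceDetector:
    def __init__(self):
        self.last_addr = None
        self.run = 0

    def observe(self, addr):
        """Return the address at which prefetching should begin, or None."""
        sequential = self.last_addr is not None and addr == self.last_addr + 1
        self.run = self.run + 1 if sequential else 1
        self.last_addr = addr
        if self.run >= 2:                      # a sequence has been established
            return addr + PREFETCH_AHEAD       # e.g. host at 411 -> prefetch from 413
        return None

det = SequenceDetector()
for host_addr in (410, 411):
    start = det.observe(host_addr)
print(start, list(range(start, start + PREFETCH_COUNT)))   # 413 and the prefetch span
```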
[0082] The foregoing description presents one embodiment of a unique data storage system and method that allows a new or second data storage system to be connected to an existing host or other data processing device with essentially no time lost in accessing the data stored on a first or donor data storage system. During the process there is real time, on-line availability of the data to the host or other connected data processing device, so normal operations of a data center can proceed. Data not involved with transfers to or from a host migrates to the new data storage system concurrently with on-line operations. In essence, FIGS. 1 through 4 depict one embodiment of a method and apparatus for connecting the new or replacement storage device to the host system with its existing or donor storage device to form a composite storage device. One transfer path enables transfers between the host system and the composite storage device. A data migration path migrates data from the existing storage device to the replacement storage device within the composite memory. A control in the replacement storage device controls the operation of the transfer path and data migration path until all data has migrated from the existing or donor storage device to the replacement storage device. Thereafter all host system transfer requests are processed in the replacement storage device and the existing or donor storage device can be removed from the system or assigned other functions.
ALTERNATE EMBODIMENTS
[0083] While the foregoing embodiment performs all these functions, the process of configuring a system, such as by disconnecting the old data storage system 14 in FIG. 1 from the CPU/host 12 and connecting the CPU/host 12 to the new data storage system 16, takes some time, usually less than one hour, during which all applications must suspend operations. In some data centers such an interruption is not acceptable. In others, the preparation for such an interruption can represent a formidable task.
[0084] In some data centers the connections between a host and old data storage system may use only a portion of the available channels. For example, in some centers a host computer with four available channels might use only two channels for communicating with the old storage device 14. Such availability often exists when the old storage device 14 connects to multiple host computers or when a single host computer connects to multiple storage devices.
[0085] FIG. 5 depicts one such system that includes the host computer 12, an old, or donor, storage device 14, and a new, or target, storage device 16, converter 22 and system configuration device 26 of FIG. 1. In FIG. 5 it is assumed that the connection 19 represents one channel and that there are four available channel interfaces on each of the host computer 12 and donor storage device 14. FIG. 5 also depicts a second host computer 12A with a connection 19A as a second channel between the host computer 12A and the donor storage device 14. In this particular configuration each of the host computers 12 and 12A has two available channels and, assuming each of the storage units has four available channel connections, each storage unit has two unused channels.
[0086] If such a configuration exists, then in accordance with another aspect of this invention, the composite storage device can be formed without any significant interruption of operations in the data center. The steps for performing such a non-disruptive transfer procedure begin with the connection of the target storage device 16 to available channels on the donor storage device 14 in the same fashion as previously indicated. However, the connection 19 remains intact while new connections 28 and 28A are established to the host computers 12 and 12A through unused paths or channels.
[0087] Once these connections are completed and the system configuration device 26 has properly configured the target storage device 16, a non-disruptive transfer procedure 300 shown in FIG. 6 begins. Step 301 verifies that all the appropriate steps have been completed to establish the appropriate configuration. If this is not done, step 301 diverts to step 302 to complete that procedure. When the setup is complete, step 301 diverts to procedure 303 that performs a swapping task so subsequent IO data transfer requests communicate through the paths 28 and 28A of FIG. 5 to the target storage device and do not communicate through the connections 19 and 19A. Step 304 in FIG. 6 represents the initiation of any of the previously or subsequently described data migration techniques.
[0088] Essentially it becomes necessary to interrupt the operation of the host computer 12 and any additional host computer, such as the host computer 12A, connected to the donor and target storage devices in order to swap the IO request from the donor storage device 14 to the target storage device 16. As known in the art, many data processing systems operate with an ability to run small batch programs with special instructions. One instruction for performing the swapping operation for an MVS system is:
[0089] S NDSDM, FROM=xxxx, TO=yyyy [SHARED=N/WTOR/CKPT]
This command establishes a procedure and can be started from a main frame operator's console for each host computer attached to the donor storage device. The NDSDM field is a mnemonic indicating that a data migration swap is being implemented. The “FROM” parameter identifies the donor storage device 14 in FIG. 5; the “TO” parameter identifies the target storage device 16. The “SHARED” parameter can have three values. “N” indicates that there are no shared host computers. This would be applied in a system for a data center as shown in FIG. 1. If two host computers connect to the donor storage device, as shown in FIG. 5, the operator initiates the transfer by selecting either a WTOR or CKPT parameter in an effort to assure that the switch of data transfer requests from the donor storage device 14 to the target storage device 16 occurs in a timely and coordinated fashion.
[0090] Once the operator issues this command in step 305, as from the operator's console, step 306 determines whether the elements to be involved in the data migration are appropriate (i.e., have a valid configuration or syntax). Step 307 determines whether the “FROM” and “TO” volumes are valid. As previously indicated each storage device can comprise one or more logical volumes. Typically a data migration will be made on a volume-by-volume basis. Step 308 determines whether the “FROM” volume has any restrictions that preclude the data migration. If the volume is restricted, if the configuration is not valid or if either the “FROM” or “TO” volumes are not valid, step 309 terminates the task 303 and generates an appropriate error message.
[0091] Assuming these tests are met satisfactorily, control transfers to step 310 in which the operator, at each of the computers 12 and 12A connected to the identified volumes, suspends IO operations, as by using standard MVS services to issue an IO ACTION STOP command. After this suspension occurs in step 310, the procedure 303 determines whether shared host computers are involved. If the data center has a configuration as shown in FIG. 1, step 311 determines that no sharing is involved and diverts control to step 312 that swaps the contents in the “FROM” and “TO” unit control blocks (UCB's). Then the system uses steps 313 and 314 to alter any duplicate volume identifications, typically by changing the identification of the volume in the donor storage device. This precludes multiple volumes with identical identifications. Once this is accomplished, step 315 reenables IO operations from the host computer 12. All subsequent data transfer requests are handled by the target storage device 16 over the connection 28.
[0092] If multiple host computers are involved, step 312 requires a prior synchronization or coordination to assure that the swap of the “FROM” and “TO” UCB's occurs in all host computers connected to the logical volume at the same time. This precludes a situation in which different host computers operate with both the donor storage device 14 and the target storage device 16. In one approach it may be possible to use services within the host computers to effect the synchronization. For example, check point services can synchronize events if an initialized control file on a separate device is shared by all the systems on-line to the FROM device. If this condition exists, the command issued in step 305 will incorporate the parameter SHARED=CKPT and operation will divert from step 311 to step 320 that initiates the operation. The procedure in FIG. 7 then awaits an indication of synchronization in step 321. If it is received within a predetermined time, control passes to step 312 to effect the switch over. If the synchronization does not occur within a particular time, then step 321 diverts to step 322 that terminates the data migration procedure by reenabling or resuming I/O operations with the donor storage device 14. Generally an error message also appears at the operator's console. Step 322 also continues the processing of data transfer requests with the donor storage device 14.
[0093] A second approach enables the transfer to occur manually from the system consoles. In this case SHARED=WTOR and steps 311 and 320 divert to step 323. If the SHARED parameter does not have the WTOR value, then, in the sequence shown in FIG. 7, a potential error exists because none of the accepted SHARED parameters has been received. Control diverts to step 322.
[0094] When the WTOR value for the SHARED parameter is decoded, step 324 issues a WTOR (Write To OperatoR) command that establishes the necessary synchronization and returns a reply. Again this command must issue from all host computers that are sharing the donor storage device 14, or a volume in that device, and the target storage device 16. Each system responds to the receipt of the WTOR command by issuing an IO ACTION STOP command and then by issuing a reply. When the replies from all the hosts indicate that everything is satisfactory, operations can continue. Step 325 then diverts control to step 312. Otherwise step 325 diverts control to step 322. Thus the steps immediately after step 310 determine whether SHARED host computers are involved in the data migration. If they are, the operations are synchronized before the data migration begins. Once IO operations are reenabled or resumed, all the host computers involved with the storage devices thereafter direct all data transfer requests to the target storage device 16 and the donor storage device 14 is effectively removed from the network.
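By way of a hedged illustration, the swapping task of FIGS. 6 and 7 might be sketched as follows. The dictionary stand-ins for hosts, volumes and unit control blocks are assumptions introduced for this sketch; they do not model actual MVS internals, and the synchronization of shared hosts is reduced to a simple flag.

```python
# Illustrative sketch of the swapping task: validate the FROM/TO volumes,
# quiesce I/O on every attached host, swap the unit control blocks so later
# requests address the target, rename a duplicate volume label, and resume.
def swap_io(hosts, from_vol, to_vol, shared="N", synchronized=True):
    if not (from_vol.get("valid") and to_vol.get("valid")) or from_vol.get("restricted"):
        raise ValueError("invalid configuration or restricted FROM volume")
    for h in hosts:
        h["io_suspended"] = True                      # IO ACTION STOP (step 310)
    if shared in ("WTOR", "CKPT") and not synchronized:
        for h in hosts:                               # step 322: stay on the donor
            h["io_suspended"] = False
        raise RuntimeError("hosts not synchronized; continuing with the donor device")
    for h in hosts:                                   # step 312: swap FROM/TO UCBs
        h["ucb_from"], h["ucb_to"] = h["ucb_to"], h["ucb_from"]
    if from_vol["label"] == to_vol["label"]:          # steps 313/314: avoid duplicates
        from_vol["label"] += "_OLD"
    for h in hosts:                                   # step 315: reenable I/O
        h["io_suspended"] = False

hosts = [{"ucb_from": "DONOR", "ucb_to": "TARGET"},
         {"ucb_from": "DONOR", "ucb_to": "TARGET"}]
swap_io(hosts, {"valid": True, "label": "VOL001"},
        {"valid": True, "label": "VOL001"}, shared="CKPT")
print(hosts[0]["ucb_from"])                           # TARGET
```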
[0095] As will be apparent, this procedure can be completed within a matter of seconds and therefore does not interrupt the operation of the data center. Further it enables the transfer to occur without any of the data center preparation steps that have been necessary in order to effect an orderly transfer with other transfer techniques.
[0096] Another alternative embodiment of this invention that can enhance operations minimizes the impact of any power failure or other problem that might disrupt the data migration process. FIG. 8 depicts details of a system such as shown in FIG. 1 to particularly identify the new storage system with greater clarity. The new or target data storage unit 16 connects to a channel director 400 that in turn connects to one or more CPU/hosts 12 (not shown in FIG. 8). A common bus 401 then connects the cache memory 18 to disk directors. FIG. 8 depicts a first disk director 402 with disk drives 403; a second disk director 404 connects the bus 401 to disk drives 405. Each disk director controls operation of its attached array of disk drives. Each disk drive may comprise one or more logical volumes, one such volume 406 being shown as a component of one of the disks in the array 403.
[0097] As previously indicated the purpose of migrating data from the old data storage system 14 to the new data storage system 16 in FIG. 1 is generally to increase the size of the available storage. Consequently when it is desired to prevent any adverse response to a disruption during the data migration phase, each volume, such as the volume 406, will have a size sufficient to accept the data from the old data storage system 14 plus a volume data map depicted as a set of tracks 407 in the volume 406. The volume data map 407 will contain information as shown in FIG. 2 limited to that information corresponding to the particular disk drive and logical volume. Thus if the volume 406 corresponds to Volume 1 in Device X as shown in FIG. 2, the map 407 would contain information concerning tracks 1 through N as depicted in FIG. 2, but no information concerning any other volume or disk unit.
[0098] During data migration a data transfer request initiates an update data map/table procedure 116 in FIG. 3; the corresponding procedure 210 in FIG. 4 operates in the background mode. FIG. 9 depicts a detailed version of those procedures. In step 410 the procedure updates the “need migration” flag 60 for the corresponding device, volume and track in the data map 24 as previously indicated. In accordance with this aspect of the invention, another step 411 stages a write request to the volume data map 407. More specifically, if the NEED MIGRATION flag 60 in FIG. 2 were changed for any of the tracks in volume 1 in FIG. 2, step 411 would stage a write request to alter the corresponding NEED MIGRATION flag in the volume data map 407 of FIG. 8.
[0099] Thus in accordance with this aspect of the invention two copies of the data map table are maintained. The first is the complete data map table 24 in FIG. 2 that is stored in the cache 18 of FIG. 1. The second data map table is distributed among the volumes of disk arrays such as disk arrays 403 and 405. Consequently should any event occur, such as a power failure, that might cause the cache 18 to lose data or might corrupt data in the cache, it becomes a simple task to reconstruct the data map table 24 in the cache from the data that is permanently stored on the distributed volume data maps on the disks 403 and 405 and thereby continue the migration from the point at which the interruption occurred. This eliminates any need to migrate previously transferred valid data elements again.
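The two-copy bookkeeping can be sketched briefly in Python. The flat dictionaries used here for the cache-resident map 24 and the per-volume maps 407 are assumptions for illustration; they merely show that every cache-map change is also staged to the on-disk volume map, from which the cache map can be rebuilt after a failure.

```python
# Illustrative sketch only: keep the cache-resident map and the per-volume
# on-disk maps in step, and rebuild the cache map from the on-disk copies
# after an interruption instead of restarting the migration.
def clear_need_migration(cache_map, volume_maps, device, volume, track):
    cache_map[(device, volume, track)] = False              # update the cache map
    volume_maps[(device, volume)][track] = False             # stage write to volume map

def rebuild_cache_map(volume_maps):
    """Reconstruct the cache-resident map from the on-disk volume maps."""
    return {(dev, vol, trk): flag
            for (dev, vol), tracks in volume_maps.items()
            for trk, flag in tracks.items()}

volume_maps = {("X", 1): {t: True for t in range(1, 4)}}
cache_map = rebuild_cache_map(volume_maps)
clear_need_migration(cache_map, volume_maps, "X", 1, 2)
print(rebuild_cache_map(volume_maps) == cache_map)           # True: copies stay consistent
```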
[0100] As another alternative embodiment it is possible to modify the channel process 25 and migrate process 27 in FIG. 1 so that their respective operations are controlled in response to certain statistical information that can be developed during the migration process, thereby to minimize response times to data transfer requests during the data migration. In essence a copy subroutine runs in a background mode to transfer data track by track in sequence from a starting location until all the data is migrated. This operation corresponds to the operation of the migrate process 27 in FIG. 1. If the host processor 12 issues a data transfer request (DTR), including either a read or write command, and the corresponding data is not located in the target storage device 16, a foreground mode is established that causes the copy subroutine to transfer the requested data. This operation in this mode corresponds to the operation of the channel process 25 in FIG. 1. If a series of such data transfer requests establishes a significant pattern of accesses to a localized area of the donor storage device 14, the parameters controlling the copy subroutine in the background mode are altered to shift the background copying to the localized area in which the statistically significant pattern of requests occurred.
[0101] FIG. 10 depicts apparatus in the form of registers that implement this invention, shown in a memory block 200; as will be apparent, the registers may be located at different locations within the data storage system 16 as part of the migrate process 27.
[0102] In the memory block 200, a STATISTICAL BLOCK SIZE register 201 records a number of consecutive blocks that will define a localized area. This is a fixed number that typically will be installed from the system configuration device 26.
[0103] A STATISTICAL BLOCK CONTROL register 202 includes an identification (ID) field 203 and a DTR NO field 204. The ID field 203 contains the identification of the statistical block currently being evaluated; the DTR NO field 204 acts as a counter that is updated each time a data transfer request (DTR) is made to that statistical block. A STATISTICAL BLOCK TRANSFER MIN register 205, also set to an initial value by the system configuration device 26, defines a user-generated minimum number of consecutive data transfer requests needed to initiate a copy program transfer. That is, register 205 establishes a threshold value that defines the boundary between random accesses, which cause no change in the operation during the background mode, and repeated accesses, which produce the background mode operating change.
[0104] A COPY PROGRAM MIN BLOCK register 206 stores a minimum number of blocks, such as data tracks on a disk, that should be moved before any relocation of the copy program can occur. Specifically, the number in this register establishes a dead band or minimum delay that must expire before the copy program can be moved in response to a series of DTR requests directed to another area.
[0105] A COPY PROGRAM STARTING ADR register 207 stores the starting address for the copy program. Typically this would be initialized to a first track.
[0106] A COPY PROGRAM BLOCK ADR register 210 stores the current block address being transferred by the copy program. Typically this will be a track identification. In a sequential mode this register will be incremented or decremented to point to a successive address location after each transfer is complete.
[0107] A COPY PROGRAM BLOCKS register 211 counts the number of blocks that have been transferred after the COPY PROGRAM STARTING ADR register 207 is updated or initialized. This count controls the relocation of the copy program; it is compared with the minimum value stored in the COPY PROGRAM MIN BLOCK register 206.
[0108] The remaining elements in the memory block 200 of FIG. 10 include a copy subroutine 212, a background mode controller 213, a foreground mode controller 214 and an interruption flag 215. As will now be described, the controllers 213 and 214 establish and control the areas from which the copy subroutine 212 transfers data from the donor storage device 14 to the target storage device 16. The interruption flag 215 controls the transfer of operation between the two modes.
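A convenient, if informal, way to picture memory block 200 is as a single record holding the registers just described. The sketch below is hypothetical and not part of the disclosure; the field names are invented, and only the comments relate them to the numbered registers. The default values for the thresholds are illustrative placeholders rather than values taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class MemoryBlock200:
    statistical_block_size: int       # register 201, set from the configuration device 26
    stat_block_id: int = -1           # ID field 203 of register 202
    stat_block_dtr_no: int = 0        # DTR NO field 204 of register 202
    stat_block_transfer_min: int = 4  # register 205, accesses needed to call it a pattern
    copy_min_blocks: int = 16         # register 206, dead band before relocation
    copy_start_addr: int = 0          # register 207, typically the first track
    copy_block_addr: int = 0          # register 210, track currently being copied
    copy_blocks_done: int = 0         # register 211, transfers since the last relocation
    interruption_flag: bool = False   # flag 215
```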
[0109] FIG. 11 depicts the various steps by which the background mode controller 213 and the copy subroutine 212 interact to transfer data on a track-by-track basis. Registers in the register set 200 are set to initial values in step 220. Then the program enters a loop comprising the remaining steps in FIG. 11 until all the NEED MIGRATION flags 60 of FIG. 2 are cleared, using step 221 as a loop control. As a first action in the loop, step 222 determines whether the STATISTICAL BLOCK INTERRUPTION flag 215 is set, indicating that the copy subroutine 212 in FIG. 10 needs to be relocated. If that condition exists, control diverts to step 223 that updates the copy program parameters in registers 207 and 210 thereby to relocate the position of the copy subroutine to another track.
[0110] If the STATISTICAL BLOCK INTERRUPTION flag 215 is not set or after the copy program parameters are updated in step 223, step 224 determines whether the NEED MIGRATION flag 60 for the new track is set. If it is, step 225 copies the track, or other block of data elements, from the donor or first data storage device 14 to the target or second storage device 16. In step 226 the system clears the NEED MIGRATION flag 60 for the corresponding track position. Steps 225 and 226 form the copy subroutine 212. When the NEED MIGRATION flag 60 for a track is not set, the block has been previously transferred, so control diverts from step 224 directly to step 227.
[0111] Step 227 increments the value in the COPY PROGRAM BLOCK ADR register 210 and step 228 increments the COPY PROGRAM BLOCKS register 211. Thus, the background mode controller 213 in FIG. 11 will, absent the setting of the STATISTICAL BLOCK INTERRUPTION flag 215, copy the tracks or data blocks from the donor storage device 14 to the target storage device 16 in an ordered sequence. Moreover the transfers are non-redundant because once a data block is transferred to the target storage device 16, all further DTR commands for a data element in that block are handled exclusively by the target storage device 16.
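As an informal restatement of the loop of FIG. 11, the following sketch (using the hypothetical MemoryBlock200 record above) shows how the background controller and the copy subroutine might interact. Here donor and target are simple lists standing in for devices 14 and 16, need_migration stands in for the flags 60, and new_params is a hypothetical callback that supplies the parameters produced in step 246 of FIG. 12; the step numbers in the comments refer to FIG. 11.

```python
def copy_subroutine(regs, need_migration, donor, target):
    """Steps 225-226: copy one track and clear its NEED MIGRATION flag."""
    track = regs.copy_block_addr
    target[track] = donor[track]       # step 225: copy the block
    need_migration[track] = False      # step 226: clear the flag


def background_mode_controller(regs, need_migration, donor, target, new_params):
    """Background loop of FIG. 11; new_params() returns (start address, current address)."""
    # Step 220 (initializing the registers) is assumed to have been done by the caller.
    while any(need_migration):                        # step 221: loop until every flag is clear
        if regs.interruption_flag:                    # step 222: relocation requested?
            regs.copy_start_addr, regs.copy_block_addr = new_params()   # step 223
            regs.copy_blocks_done = 0
            regs.interruption_flag = False
        if need_migration[regs.copy_block_addr]:      # step 224: track still needs migration?
            copy_subroutine(regs, need_migration, donor, target)
        regs.copy_block_addr = (regs.copy_block_addr + 1) % len(donor)  # step 227
        regs.copy_blocks_done += 1                    # step 228
    # Falling out of the loop through step 221 corresponds to the DONE procedure 247.
```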
[0112] FIG. 12 depicts the operation of the foreground mode controller 214 that controls the response to a DTR (data transfer request) command, makes any necessary transfer and determines whether the accesses define a significant pattern that warrants setting the STATISTICAL BLOCK INTERRUPTION flag 215. As part of an initialization procedure 230 in FIG. 12, the system will initialize (1) the statistical block size, (2) the statistical block control ID and DTR NO values, (3) the copy program minimum block size and (4) the copy program starting position in the corresponding registers in block 200 of FIG. 10. Step 231 waits for a host command. When a host command is received, step 232 determines whether that command is a data transfer request (DTR) command. If not, step 232 branches to step 233 where the command is processed. Thereafter the system awaits the receipt of a next command at step 231.
[0113] Each time a DTR command is received, control branches from step 232 to step 234 to determine whether the target storage device 16 contains the requested data element. If it does, step 235 transfers the data element to the host computer in accordance with the DTR command. There is no requirement for any communication with the donor storage device 14. The response time then is the response time of the target storage device 16.
[0114] If the requested data element is not in the target storage device 16, migration is necessary. Step 236 interrupts the operation of the background mode controller 213 in FIG. 11 so that step 237 can transfer the track or other block containing the data element identified by the DTR command. In essence, step 237 calls the copy subroutine 212 in FIG. 10 and supplies the arguments or parameters necessary to effect the transfer.
[0115] Next there is a determination of whether the access has established a significant pattern. In this particular embodiment, step 238 compares the statistical block identification associated with the DTR command with the ID field 203 in the STATISTICAL BLOCK CONTROL register 202. If the numbers are not the same, step 240 transfers control to step 241, which replaces the contents of the ID field 203 with the corresponding statistical block identification for the DTR command. Control then returns to await the next host command at step 231. Thus the foreground mode controller 214 follows the control path through step 241 in response to random DTR accesses.
[0116] If the identification is the same as the identification in the field 203, step 240 branches to step 242. This branch indicates that the access for this DTR command is localized to an area defined by the statistical block size in register 201 of FIG. 10. In step 242 the contents of the DTR NO field 204 are incremented. If the number in the field 204 is not above a threshold, step 243 diverts to loop back to await the next host command at step 231. If the number is above the threshold, indicating a significant pattern of accesses to a localized area, step 243 diverts to step 244 that compares the minimum copy block size in register 206 with the number of transfers that have occurred as obtained from register 211. If the minimum block size has not been satisfied, step 245 diverts back to step 231 to wait for the next host command. Thus no relocation of the copy subroutine 212 will occur until the minimum number of transfers has been made from an existing localized area. Once that minimum is reached, step 245 diverts to step 246 that sets the interruption flag 215. Step 246 also generates new copy program parameters and then restores the background mode of the copy procedure.
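The decision path of FIG. 12 can likewise be sketched informally. This sketch reuses the hypothetical copy_subroutine and MemoryBlock200 from the sketches above, and it assumes, for illustration only, that a track's statistical block identification is simply the track number divided by the statistical block size; the step numbers in the comments refer to FIG. 12.

```python
def foreground_mode_controller(regs, need_migration, donor, target, track):
    """Handle one DTR command for a data element on the given track (FIG. 12)."""
    if not need_migration[track]:                     # step 234: already in target device 16
        return target[track]                          # step 235: answer from the target alone

    # Steps 236-237: interrupt background copying and migrate this track immediately.
    saved, regs.copy_block_addr = regs.copy_block_addr, track
    copy_subroutine(regs, need_migration, donor, target)
    regs.copy_block_addr = saved

    # Steps 238-246: test whether the accesses form a significant localized pattern.
    stat_block = track // regs.statistical_block_size
    if stat_block != regs.stat_block_id:              # steps 240-241: random access
        regs.stat_block_id = stat_block
        regs.stat_block_dtr_no = 1
    else:
        regs.stat_block_dtr_no += 1                   # step 242
        if (regs.stat_block_dtr_no > regs.stat_block_transfer_min        # step 243
                and regs.copy_blocks_done >= regs.copy_min_blocks):      # steps 244-245
            regs.interruption_flag = True             # step 246: relocate the background copy
    return target[track]
```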
[0117] When the interruption flag 215 is set and the background mode controller 213 in FIG. 11 begins its next iteration, step 222 determines that the interruption flag 215 is set and diverts control to step 223 to update the copy subroutine parameters or arguments with the new copy program parameters generated in step 246 of FIG. 12. This relocates the copy subroutine to the statistical block corresponding to the localized area accessed by the sequential DTR commands. That is, the copy subroutine begins to transfer blocks or tracks sequentially, starting at a first block or track in the new statistical block or localized area that exhibits the significant access pattern, and continues transfers from that localized area until at least the minimum number of blocks has been transferred. The sequential transfer then continues until the DTR commands establish a statistically significant pattern of accesses within another statistical block.
[0118] To summarize the operation of this invention, the copy subroutine 212, essentially comprising steps 225 and 226 in FIG. 11, operates in response to calls from the background mode controller 213 of FIG. 11 to move data on a track-by-track, or other data block-by-data block, basis from the donor storage device 14 to the target storage device 16. If an occasional or random access is requested by a DTR command, the foreground mode controller 214 in FIG. 12 interrupts the operation of the background mode controller 213 in FIG. 11 to transfer the track or data block containing the requested data element to the target storage device 16. Thereafter control passes back to continue the copy subroutine calls from the background mode controller 213 according to the original sequence.
[0119] If, however, successive DTR commands cause the foreground mode controller 214 to access data blocks concentrated in a particular statistical block, the system predicts that further requests will be made to that statistical block. The foreground mode controller 214 in FIG. 12 then alters the arguments used by the background mode controller 213 in FIG. 11 to shift the operation of the background mode controller 213 to the statistical block receiving the repeated DTR requests. The minimum block size prevents another shift of that operation until such time as a minimum number of data blocks or tracks have been transferred. This process then continues until all the NEED MIGRATION flags 60 have been cleared, indicating that all the data has migrated. When this occurs, step 221 in FIG. 11 transfers control to a DONE procedure 247 that causes appropriate completion messages to be generated.
[0120] FIG. 13 depicts, in graphical form, the comparison of host computer response time to DTR commands as a function of data migration time. Graph 250 represents a typical response scenario for random access requests. The graph indicates that initially there will be maximum response times and that these response times will decrease to normal response times in a substantially linear fashion as the migration continues. The maximum response time represents the time required to complete a transfer from the donor storage device 14. Essentially and intuitively, as more data migrates to the target storage device 16 the more likely it is that a DTR command will access data already in the target storage device 16, so the response time will be that of the target storage device 16.
[0121] Graph 251 depicts an optimal data migration response curve. It is assumed for this curve that it would be possible to predict with certainty the locations accessed by the pattern of DTR commands. Relevant data is transferred initially so that the response time drops rapidly to the minimum value.
[0122] In actual practice it is not always possible to make such a prediction. Graph 252 depicts a typically observed response time pattern realized with this invention. It has been found that this invention significantly reduces the response times as a function of data migration in comparison with graph 250. In many cases the actual response time graph approaches the optimal graph 251.
[0123] Consequently, the method and apparatus disclosed in FIGS. 10 through 12 enable the efficient transfer of data from one storage device to another in concert with other external operations as represented by DTR commands. The transfers are particularly efficient in a data migration scenario where the data migration occurs in a transparent or parallel mode with minimal impact on response times to DTR commands. [0123]
[0124] Although the present invention is preferably implemented in software, this is not a limitation of the present invention, as those well known in the art can appreciate that the present invention can be implemented in hardware or in various combinations of hardware and software, without departing from the scope of the invention. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention, which is not to be limited except by the claims which follow. [0124]
[0125] This invention has been disclosed in terms of certain embodiments. It will be apparent that many modifications can be made to the disclosed apparatus without departing from the invention. Therefore, it is the intent of the appended claims to cover all such variations and modifications as come within the true spirit and scope of this invention. [0125]
Claims (60)
[1" id="US-20010001870-A1-CLM-00001] 1. Data migration apparatus for use in a data processing system including an existing storage device for data elements connected to a host system that performs data element transfers with the existing storage device in response to data transfer requests and a replacement storage device, said data migration apparatus comprising:
A) connection means for forming the existing and replacement storage devices into a composite storage device with a path to the host system thereby enabling the composite storage device to respond to data transfer requests from the host system; and
B) data migration means for migrating data from the existing storage device to the replacement storage device concurrently with data transfer requests from the host system.
[2" id="US-20010001870-A1-CLM-00002] 2. Data migration apparatus as recited in
claim 1 wherein said data migration means comprises:
i) first transfer means for producing data transfers between the host system and the composite storage device in response to data transfer requests from the host system;
ii) second transfer means for producing data transfers from the existing storage device to the replacement storage device; and
iii) control means in the replacement storage device for controlling said first and second transfer means until all the data elements have migrated from the existing storage device to the replacement storage device whereupon thereafter all host system transfer requests are processed in the replacement storage device.
[3" id="US-20010001870-A1-CLM-00003] 3. Data migration apparatus as recited in
claim 2 wherein the existing storage device stores data elements in sequentially addressed locations defining a data block and wherein said first and second transfer means include means for transferring, during each transfer, a data block from the existing storage device to the replacement storage device.
[4" id="US-20010001870-A1-CLM-00004] 4. Data migration apparatus as recited in
claim 3 additionally comprising a table that is initialized to indicate that all data blocks in the existing storage device require data migration, said control means updating said table in response to each data block migration thereby to terminate the data migration when all data elements have migrated to the replacement storage device.
[5" id="US-20010001870-A1-CLM-00005] 5. Data migration apparatus as recited in
claim 4 wherein the replacement volume comprises at least one logical volume and said table includes a portion allocated to each logical volume, said apparatus additionally comprising means for transferring each table portion to the replacement storage device periodically.
[6" id="US-20010001870-A1-CLM-00006] 6. Data migration apparatus as recited in
claim 4 wherein the replacement volume comprises a plurality of logical volumes and said table includes a portion allocated to each logical volume, said apparatus additionally comprising means for transferring each table portion to a corresponding logical volume in the replacement storage device after each migration of a data block from a logical volume in the existing storage device to the corresponding logical volume in the replacement storage device.
[7" id="US-20010001870-A1-CLM-00007] 7. Data migration apparatus as recited in
claim 4 wherein said connection means includes:
i) means associated with said first transfer means for establishing a path between the host system and the replacement storage device, and
ii) means associated with said second transfer means for establishing a second path between the existing and replacement storage devices.
[8" id="US-20010001870-A1-CLM-00008] 8. Data migration apparatus as recited in
claim 4 wherein said connection means includes:
i) means associated with said first transfer means for establishing a first path between the replacement storage device and the host system in lieu of the path between the existing storage device and the host system, and
ii) means associated with said second transfer means for establishing a path between the existing and replacement storage devices whereby all data transfers in response to data transfer requests from the host system occur with the replacement storage device.
[9" id="US-20010001870-A1-CLM-00009] 9. Data migration apparatus as recited in
claim 4 wherein the host system includes a plurality of input-output connections available for connection to storage devices, wherein the existing storage device connects to a first input-output connection and wherein said connection means includes:
i) means associated with said first transfer means for establishing a first path between the replacement storage device and a second host system input-output connection whereby the first path is in parallel with the connection between the host system and the existing storage device, and
ii) means associated with said second transfer means for establishing a second path between the existing and replacement storage devices, said control means effecting the data migration after the host system reroutes input-output operations to the other of the host system input-output connections.
[10" id="US-20010001870-A1-CLM-00010] 10. Data migration apparatus as recited in
claim 4 wherein said first and second transfer means effect transfers by controlling the operation of a copy subroutine for transferring data blocks from the existing storage device to the replacement storage device in response to control parameters, said first transfer means including a foreground mode controller for establishing first values for the control parameters in response to data transfer requests to data blocks located only in the existing storage device and said second transfer means including a background mode controller for establishing second values of the control parameters.
[11" id="US-20010001870-A1-CLM-00011] 11. Data migration apparatus as recited in
claim 10 additionally comprising:
A) means for determining the existence of a significant pattern of accesses to the existing storage device controlled by said foreground mode controller; and
B) means for altering the control parameters from said background mode controller in response to the occurrence of the significant pattern.
[12" id="US-20010001870-A1-CLM-00012] 12. Data migration apparatus as recited in
claim 11 wherein said table in said control means includes a flag corresponding to each data block having a first value indicating that the corresponding data block is located only in the existing storage device and a second value indicating that the corresponding data block has migrated to the replacement storage device and wherein one of the control parameters is the address of a data block location, said copy subroutine including:
i) means for migrating a data block from the existing storage device to the replacement storage device when the corresponding flag is at the first value; and
ii) means for establishing the second flag value in response to the migration.
[13" id="US-20010001870-A1-CLM-00013] 13. Data migration apparatus as recited in
claim 12 wherein said means for determining the existence of a significant pattern includes:
i) means for defining identifiable statistical blocks comprising a predetermined number of contiguous data blocks; and
ii) means for counting successive data transfer requests initiated by said foreground controller that access a given statistical block for a predetermined number of data transfers; said altering means includes means for setting an interruption flag; and
said background mode controller includes means responsive to the interruption flag for loading the address of the statistical block as a control parameter.
[14" id="US-20010001870-A1-CLM-00014] 14. Data migration apparatus as recited in
claim 13 additionally comprising an interruption flag:
said means for determining the existence of a significant pattern additionally including means for setting the interruption flag; and
said background mode controller includes means responsive to the interruption flag for loading the address of the statistical block; and
said altering means comprises:
i) means for monitoring the number of data transfers by the copy subroutine in response to the operation of said background controller; and
ii) means for setting the interruption flag only after a predetermined number of iterations of the copy subroutine have been performed in response to the operation of said background controller.
[15" id="US-20010001870-A1-CLM-00015] 15. Data migration apparatus as recited in
claim 10 wherein said connection means includes:
i) means associated with said first transfer means for establishing a path between the host system and the replacement storage device, and
ii) means associated with said second transfer means for establishing a second path between the existing and replacement storage devices.
[16" id="US-20010001870-A1-CLM-00016] 16. Data migration apparatus as recited in
claim 10 wherein said connection means includes:
i) means associated with said first transfer means for establishing a first path between the replacement storage device and the host system in lieu of the path between the existing storage device and the host system, and
ii) means associated with said second transfer means for establishing a path between the existing and replacement storage devices whereby all data transfers in response to data transfer requests from the host system occur with the replacement storage device.
[17" id="US-20010001870-A1-CLM-00017] 17. Data migration apparatus as recited in
claim 10 wherein the host system includes a plurality of input-output connections available for connection to storage devices, wherein the existing storage device connects to a first input-output connection and wherein said connection means includes:
i) means associated with said first transfer means for establishing a first path between the replacement storage device and a second host system input-output connection whereby the first path is in parallel with the connection between the host system and the existing storage device, and
ii) means associated with said second transfer means for establishing a second path between the existing and replacement storage devices, said control means effecting the data migration after the host system reroutes input-output operations to the other of the host system input-output connections.
[18" id="US-20010001870-A1-CLM-00018] 18. Data migration apparatus as recited in
claim 10 wherein the replacement volume comprises at least one logical volume and said table includes a portion allocated to each logical volume, said apparatus additionally comprising means for transferring each table portion to the replacement storage device periodically.
[19" id="US-20010001870-A1-CLM-00019] 19. Data migration apparatus as recited in
claim 10 wherein the replacement volume comprises a plurality of logical volumes and said table includes a portion allocated to each logical volume, said apparatus additionally comprising means for transferring each table portion to a corresponding logical volume in the replacement storage device after each migration of a data block from a logical volume in the existing storage device to the corresponding logical volume in the replacement storage device.
[20" id="US-20010001870-A1-CLM-00020] 20. Data migration apparatus as recited in
claim 19 wherein said connection means includes:
i) means associated with said first transfer means for establishing a path between the host system and the replacement storage device, and
ii) means associated with said second transfer means for establishing a second path between the existing and replacement storage devices.
[21" id="US-20010001870-A1-CLM-00021] 21. Data migration apparatus as recited in
claim 19 wherein said connection means includes:
i) means associated with said first transfer means for establishing a first path between the replacement storage device and the host system in lieu of the path between the existing storage device and the host system, and
ii) means associated with said second transfer means for establishing a path between the existing and replacement storage devices whereby all data transfers in response to data transfer requests from the host system occur with the replacement storage device.
[22" id="US-20010001870-A1-CLM-00022] 22. Data migration apparatus as recited in
claim 19 wherein the host system includes a plurality of input-output connections available for connection to storage devices, wherein the existing storage device connects to a first input-output connection and wherein said connection means includes:
i) means associated with said first transfer means for establishing a first path between the replacement storage device and a second host system input-output connection whereby the first path is in parallel with the connection between the host system and the existing storage device, and
ii) means associated with said second transfer means for establishing a second path between the existing and replacement storage devices, said control means effecting the data migration after the host system reroutes input-output operations to the other of the host system input-output connections.
[23" id="US-20010001870-A1-CLM-00023] 23. Data migration apparatus as recited in
claim 1 wherein said data migration means includes:
i) means for identifying data elements that have migrated from the existing storage device to the replacement storage device, and
ii) means for copying said identification means to said replacement storage device periodically.
[24" id="US-20010001870-A1-CLM-00024] 24. Data migration apparatus as recited in
claim 23 wherein data elements are stored in the storage devices in data blocks and said identification means includes a table that identifies each data block to be migrated, said data migration means updating said table with each data block migration to the replacement storage device.
[25" id="US-20010001870-A1-CLM-00025] 25. Data migration apparatus as recited in
claim 1 wherein said data migration means includes:
i) a copy subroutine means for transferring data elements from the existing storage device to the replacement storage device in accordance with control parameters,
ii) foreground mode controller means for calling said copy subroutine means by generating corresponding control parameters in response to a data transfer request from the host system for a data element that is only in the existing storage device,
iii) background mode controller means for calling said copy subroutine means by generating corresponding control parameters when said foreground mode controller is idle, and
iv) pattern recognition means for altering the control parameters generated by said background controller means in response to the detection of a pattern of accesses by said foreground mode controller means.
[26" id="US-20010001870-A1-CLM-00026] 26. Data migration apparatus as recited in
claim 1 wherein said data migration means includes:
i) means for establishing a first transfer path between the replacement storage device and the host system in lieu of the connection between the existing storage device and the host system,
ii) means for establishing a second transfer path between the existing and replacement storage devices,
iii) means for enabling the operation of said data migration means after the establishment of said first and second transfer paths.
[27" id="US-20010001870-A1-CLM-00027] 27. Data migration apparatus as recited in
claim 1 wherein each of the host system and replacement storage device includes plural connection means for communicating therebetween in response to data transfer requests, one of the connection means on the host system being connected to the existing data storage device, said data migration means includes:
i) means for establishing a first transfer path between the replacement storage device and another of the connection means on the host system,
ii) means for establishing a second transfer path between the existing and replacement storage devices,
iii) means for enabling the operation of said data migration means after the establishment of said first and second transfer paths and the modification of the host system to route data transfer requests to the other of the connection means thereon.
[28" id="US-20010001870-A1-CLM-00028] 28. A method for migrating data elements from an existing storage device to as replacement storage device in a data processing system that additionally includes a host system that performs data element transfers with the existing storage device in response to data transfer requests, said method comprising:
A) connecting the existing and replacement storage devices into a composite storage device with a path to the host system thereby enabling the composite storage device to respond to data transfer requests from the host system; and
B) migrating data elements from the existing storage device to the replacement storage device concurrently with data transfer requests from the host system.
[29" id="US-20010001870-A1-CLM-00029] 29. A method as recited in
claim 28 wherein said step of migrating comprises:
i) performing first transfers between the host system and the composite storage device in response to data transfer requests from the host system;
ii) performing second transfers from the existing storage device to the replacement storage device; and
iii) controlling said first and second transfers until all the data elements have migrated from the existing storage device to the replacement storage device whereupon thereafter all host system transfer requests are processed in the replacement storage device.
[30" id="US-20010001870-A1-CLM-00030] 30. A method as recited in
claim 29 wherein the existing storage device stores data elements in sequentially addressed locations defining a data block and wherein each of said first and second transferring steps transfer a data block from the existing storage device to the replacement storage device.
[31" id="US-20010001870-A1-CLM-00031] 31. A method as recited in
claim 30 additionally comprising initializing a table to indicate that all data blocks in the existing storage device require data migration and updating said table in response to each data block migration thereby to terminate the data migration when all data elements have migrated to the replacement storage device.
[32" id="US-20010001870-A1-CLM-00032] 32. A method as recited in
claim 31 wherein the replacement volume comprises at least one logical volume and said table includes a portion allocated to each logical volume, said method comprising the additional step of transferring each table portion to the replacement storage device periodically.
[33" id="US-20010001870-A1-CLM-00033] 33. A method as recited in
claim 31 wherein the replacement volume comprises a plurality of logical volumes and said table includes a portion allocated to each logical volume, said method additionally comprising the step of transferring each table portion to a corresponding logical volume in the replacement storage device after each migration of a data block from a logical volume in the existing storage device to the corresponding logical volume in the replacement storage device.
[34" id="US-20010001870-A1-CLM-00034] 34. A method as recited in
claim 31 wherein said step of connecting the existing and replacement storage devices includes:
i) establishing a first path between the host system and the replacement storage device, and
ii) establishing a second path between the existing and replacement storage devices.
[35" id="US-20010001870-A1-CLM-00035] 35. A method as recited in
claim 31 wherein said step of connecting the existing and replacement storage devices includes:
i) establishing a first path between the replacement storage device and the host system in lieu of the path between the existing storage device and the host system, and
ii) establishing a second path between the existing and replacement storage devices whereby all data transfers in response to data transfer requests from the host system occur with the replacement storage device.
[36" id="US-20010001870-A1-CLM-00036] 36. A method as recited in
claim 31 wherein the host system includes a plurality of input-output connections available for connection to storage devices, wherein the existing storage device connects to a first input-output connection and wherein said step of connecting includes:
i) establishing a first path between the replacement storage device and a second host system input-output connection whereby the first path is in parallel with the connection between the host system and the existing storage device,
ii) establishing a second path between the existing and replacement storage devices, and
iii) rerouting host system input-output operations to the second host system input-output connection.
[37" id="US-20010001870-A1-CLM-00037] 37. A method as recited in
claim 31 wherein said first and second transfers are effected by controlling the operation of a copy subroutine for transferring data blocks from the existing storage device to the replacement storage device in response to control parameters, said first transfer including foreground mode control for establishing first values for the control parameters in response to data transfer requests to data blocks located only in the existing storage device and said second transfer including background mode control for establishing second values of the control parameters.
[38" id="US-20010001870-A1-CLM-00038] 38. A method as recited in
claim 37 additionally comprising:
A) determining the existence of a significant pattern of accesses to the existing storage device controlled by said foreground mode controller; and
B) altering the control parameters from said background mode controller in response to the occurrence of the significant pattern.
[39" id="US-20010001870-A1-CLM-00039] 39. A method as recited in
claim 38 wherein the table includes a flag corresponding to each data block having a first value indicating that the corresponding data block is located only in the existing storage device and a second value indicating that the corresponding data block has migrated to the replacement storage device and wherein one of the control parameters is the address of a data block location, said copy subroutine including the steps of:
i) migrating a data block from the existing storage device to the replacement storage device when the corresponding flag is at the first value; and
ii) establishing the second flag value in response to the migration.
[40" id="US-20010001870-A1-CLM-00040] 40. A method as recited in
claim 39 wherein said step of determining the existence of a significant pattern includes:
i) defining identifiable statistical blocks comprising a predetermined number of contiguous data blocks; and
ii) counting successive data transfer requests initiated by said foreground controller that access a given statistical block for a predetermined number of data transfers;
said step of altering includes setting an interruption flag; and
said background mode control includes responding to the interruption flag being set by loading the address of the statistical block as a control parameter.
[41" id="US-20010001870-A1-CLM-00041] 41. A method as recited in
claim 40 wherein:
said step of determining the existence of a significant pattern additionally includes setting the interruption flag; and
said step of background mode control includes responding to the interruption flag by loading the address of the statistical block; and
said step of altering comprises:
i) monitoring the number of data transfers by the copy subroutine in response to said background mode control; and
ii) setting the interruption flag only after a predetermined number of iterations of the copy subroutine have been performed in response to said background mode control.
[42" id="US-20010001870-A1-CLM-00042] 42. A method as recited in
claim 37 wherein said step of connecting includes:
i) establishing a path between the host system and the replacement storage device, and
ii) establishing a second path between the existing and replacement storage devices.
[43" id="US-20010001870-A1-CLM-00043] 43. A method as recited in
claim 37 wherein said step of connecting includes:
i) establishing a first path between the replacement storage device and the host system in lieu of the path between the existing storage device and the host system, and
ii) establishing a path between the existing and replacement storage devices whereby all data transfers in response to data transfer requests from the host system occur with the replacement storage device.
[44" id="US-20010001870-A1-CLM-00044] 44. A method as recited in
claim 37 wherein the host system includes a plurality of input-output connections available for connection to storage devices, wherein the existing storage device connects to a first input-output connection and wherein said step of connecting includes:
i) establishing a first path between the replacement storage device and a second host system input-output connection whereby the first path is in parallel with the connection between the host system and the existing storage device,
ii) establishing a second path between the existing and replacement storage devices, and
iii) rerouting input-output operations to the other of the host system input-output connections.
[45" id="US-20010001870-A1-CLM-00045] 45. A method as recited in
claim 37 wherein the replacement volume comprises at least one logical volume and the table includes a portion allocated to each logical volume, said method additionally comprising the step of transferring each table portion to the replacement storage device periodically.
[46" id="US-20010001870-A1-CLM-00046] 46. A method as recited in
claim 37 wherein the replacement volume comprises a plurality of logical volumes and the table includes a portion allocated to each logical volume, said method additionally comprising the step of transferring each table portion to a corresponding logical volume in the replacement storage device after each migration of a data block from a logical volume in the existing storage device to the corresponding logical volume in the replacement storage device.
[47" id="US-20010001870-A1-CLM-00047] 47. A method as recited in
claim 37 wherein said step of connecting includes:
i) establishing a path between the host system and the replacement storage device, and
ii) establishing a second path between the existing and replacement storage devices.
[48" id="US-20010001870-A1-CLM-00048] 48. A method as recited in
claim 46 wherein said step of connecting includes:
i) establishing a first path between the replacement storage device and the host system in lieu of the path between the existing storage device and the host system, and
ii) establishing a second path between the existing and replacement storage devices whereby all data transfers in response to data transfer requests from the host system occur with the replacement storage device.
[49" id="US-20010001870-A1-CLM-00049] 49. A method as recited in
claim 46 wherein the host system includes a plurality of input-output connections available for connection to storage devices, wherein the existing storage device connects to a first input-output connection and wherein said step of connecting includes:
i) establishing a first path between the replacement storage device and a second host system input-output connection whereby the first path is in parallel with the connection between the host system and the existing storage device,
ii) establishing a second path between the existing and replacement storage devices, and
iii) rerouting input-output operations to the other of the host system input-output connections.
[50" id="US-20010001870-A1-CLM-00050] 50. A method as recited in
claim 28 wherein said data migration means includes:
i) means for identifying data elements that have migrated from the existing storage device to the replacement storage device, and
ii) means for periodically copying said identification means to said replacement storage device.
[51" id="US-20010001870-A1-CLM-00051] 51. A method as recited in
claim 23 wherein data elements are stored in the storage devices in data blocks and said identification means includes a table that identifies each data block to be migrated, said method comprising the step of updating the table each time a data block migrates to the replacement storage device.
[52" id="US-20010001870-A1-CLM-00052] 52. A method as recited in
claim 28 wherein a copy subroutine transfers data elements from the existing storage device to the replacement storage device in accordance with control parameters, said method comprising the additional steps of:
i) calling the copy subroutine in a foreground mode by generating corresponding control parameters in response to a data transfer request from the host system for a data element that is only in the existing storage device,
ii) calling the copy subroutine in a background mode by generating corresponding control parameters in the absence of a foreground mode call, and
iii) altering the control parameters generated by a background mode in response to the detection of a pattern of calls in the foreground mode.
[53" id="US-20010001870-A1-CLM-00053] 53. A method as recited in
claim 28 additionally comprising the steps of:
i) establishing a first transfer path between the replacement storage device and the host system in lieu of the connection between the existing storage device and the host system,
ii) establishing a second transfer path between the existing and replacement storage devices,
iii) enabling the migration of data elements after the establishment of the first and second transfer paths.
[54" id="US-20010001870-A1-CLM-00054] 54. A method as recited in
claim 28 wherein each of the host system and replacement storage device include plural connection means for communicating therebetween in response to data transfer requests, a first connection means on the host system being connected to the existing data storage device, said method comprising the additional steps of:
i) establishing a first transfer path between the replacement storage device and a second connection means on the host system,
ii) establishing a second transfer path between the existing and replacement storage devices,
iii) modifying the operation of the host system to route data transfer requests from the first connection means to the second connection means thereby enabling the migration of data elements after the establishment of said first and second transfer paths.
[55" id="US-20010001870-A1-CLM-00055] 55. Data migration apparatus for use in a data processing system including an existing storage device for data elements connected to a host system that performs data element transfers with the existing storage device in response to data transfer requests and a replacement storage device, said data migration apparatus comprising:
A) connections for forming the existing and replacement storage devices into a composite storage device with a path to the host system thereby enabling the composite storage device to respond to data transfer requests from the host system; and
B) a data migration system that migrates data from the existing storage device to the replacement storage device concurrently with data transfer requests from the host system.
[56" id="US-20010001870-A1-CLM-00056] 56. Data migration apparatus as recited in
claim 55 wherein said data migration system includes:
i) an identifier that stores the identity of data elements that have migrated from the existing storage device to the replacement storage device, and
ii) a copier that replicates the contents of said identifier to said replacement storage device periodically.
[57" id="US-20010001870-A1-CLM-00057] 57. Data migration apparatus as recited in
claim 56 wherein data elements are stored in the storage devices in data blocks and said identifier includes a table that identifies each data block to be migrated and wherein said data migration system updates said table with each data block migration to the replacement storage device.
[58" id="US-20010001870-A1-CLM-00058] 58. Data migration apparatus as recited in
claim 55 wherein said data migration system includes:
i) a copy subroutine for transferring data elements from the existing storage device to the replacement storage device in accordance with control parameters,
ii) a foreground mode controller that calls said copy subroutine by generating corresponding control parameters in response to a data transfer request from the host system for a data element that is only in the existing storage device,
iii) a background mode controller for calling said copy subroutine means by generating corresponding control parameters when said foreground mode controller is idle, and
iv) a pattern recognition system for altering the control parameters generated by said background mode controller in response to the detection of a pattern of accesses by said foreground mode controller.
[59" id="US-20010001870-A1-CLM-00059] 59. Data migration apparatus as recited in
claim 55 wherein said data migration system includes:
i) a first transfer path established between the replacement storage device and the host system in lieu of the connection between the existing storage device and the host system,
ii) a second transfer path established between the existing and replacement storage devices,
iii) a controller that enables the operation of said data migration system after the establishment of said first and second transfer paths.
[60" id="US-20010001870-A1-CLM-00060] 60. Data migration apparatus as recited in
claim 55 wherein each of the host system and replacement storage device includes plural connections for communicating therebetween in response to data transfer requests, a first connection on the host system being connected to the existing data storage device, said data migration system includes:
i) a first transfer path established between the replacement storage device and a second connection on the host system,
ii) a second transfer path established between the existing and replacement storage devices,
iii) a controller that enables the operation of said data migration system after the establishment of said first and second transfer paths and the modification of the host system to route data transfer requests to the second connection means.
US5566317A|1994-06-14|1996-10-15|International Business Machines Corporation|Method and apparatus for computer disk drive management|
US5689732A|1994-06-21|1997-11-18|Sony Corporation|Apparatus for recording and reproducing data having a single recording and reproducing unit and a plurality of detachable interfaces for connecting to different types of computer ports|
US5435004A|1994-07-21|1995-07-18|International Business Machines Corporation|Computerized system and method for data backup|
CA2154089A1|1994-07-22|1996-01-23|Gerald W. Weare|Remote subscriber migration|
JP3687111B2|1994-08-18|2005-08-24|株式会社日立製作所|Storage device system and storage device control method|
US5564037A|1995-03-29|1996-10-08|Cheyenne Software International Sales Corp.|Real time data migration system and method employing sparse files|
US5680640A|1995-09-01|1997-10-21|Emc Corporation|System for migrating data by selecting a first or second transfer means based on the status of a data element map initialized to a predetermined state|
US5706467A|1995-09-05|1998-01-06|Emc Corporation|Sequential cache management system utilizing the establishment of a microcache and managing the contents of such according to a threshold comparison|
US5592432A|1995-09-05|1997-01-07|Emc Corp|Cache management system using time stamping for replacement queue|
US5819020A|1995-10-16|1998-10-06|Network Specialists, Inc.|Real time backup system|
US5657486A|1995-12-07|1997-08-12|Teradyne, Inc.|Automatic test equipment with pipelined sequencer|
US6405294B1|1995-12-29|2002-06-11|Mci Communications Corporation|Data center migration method and system using data mirroring|
US5835954A|1996-09-12|1998-11-10|International Business Machines Corporation|Target DASD controlled data migration move|
US5544347A|1990-09-24|1996-08-06|Emc Corporation|Data storage system controlled remote data mirroring with respectively maintained data indices|
US6052797A|1996-05-28|2000-04-18|Emc Corporation|Remotely mirrored data storage system with a count indicative of data consistency|
US5901327A|1996-05-28|1999-05-04|Emc Corporation|Bundling of write data from channel commands in a command chain for transmission over a data link between data storage systems for remote data mirroring|
US5680640A|1995-09-01|1997-10-21|Emc Corporation|System for migrating data by selecting a first or second transfer means based on the status of a data element map initialized to a predetermined state|
JP3287203B2|1996-01-10|2002-06-04|株式会社日立製作所|External storage controller and data transfer method between external storage controllers|
US7114049B2|1997-01-08|2006-09-26|Hitachi, Ltd.|Adaptive remote copy in a heterogeneous environment|
JPH09212371A|1996-02-07|1997-08-15|Nec Corp|Register saving and restoring system|
JPH09237162A|1996-02-23|1997-09-09|Hewlett Packard Co <Hp>|Scanning data storage system, its stylus and medium abrasion managing methods and remaining life display device|
JP3641872B2|1996-04-08|2005-04-27|株式会社日立製作所|Storage system|
US5933653A|1996-05-31|1999-08-03|Emc Corporation|Method and apparatus for mirroring data in a remote data storage system|
US5857208A|1996-05-31|1999-01-05|Emc Corporation|Method and apparatus for performing point in time backup operation in a computer system|
US5870733A|1996-06-14|1999-02-09|Electronic Data Systems Corporation|Automated system and method for providing access data concerning an item of business property|
US5835954A|1996-09-12|1998-11-10|International Business Machines Corporation|Target DASD controlled data migration move|
JP3193880B2|1996-12-11|2001-07-30|株式会社日立製作所|Data migration method|
US7213114B2|2001-05-10|2007-05-01|Hitachi, Ltd.|Remote copy for a storage controller in a heterogeneous environment|
US5943689A|1997-03-31|1999-08-24|Emc Corporation|On-demand initialization of memory locations as they are requested command|
JP3671595B2|1997-04-01|2005-07-13|株式会社日立製作所|Compound computer system and compound I / O system|
JP3414218B2|1997-09-12|2003-06-09|株式会社日立製作所|Storage controller|
US6145066A|1997-11-14|2000-11-07|Amdahl Corporation|Computer system with transparent data migration between storage volumes|
US6115463A|1997-11-21|2000-09-05|Telefonaktiebolaget Lm Ericsson |Migration of subscriber data between home location registers of a telecommunications system|
JP3410010B2|1997-12-24|2003-05-26|株式会社日立製作所|Subsystem migration method and information processing system|
US6631477B1|1998-03-13|2003-10-07|Emc Corporation|Host system for mass storage business continuance volumes|
US6192488B1|1998-07-13|2001-02-20|Chung-Ping Li|Restoring method for hard disk|
US7167962B2|1999-08-19|2007-01-23|Hitachi, Ltd.|Remote copy for a storage controller with reduced data size|
DE69938378T2|1998-08-20|2009-04-30|Hitachi, Ltd.|Copy data to storage systems|
US6188702B1|1998-11-17|2001-02-13|Inrange Technologies Corporation|High speed linking module|
US6389488B1|1999-01-28|2002-05-14|Advanced Micro Devices, Inc.|Read ahead buffer for read accesses to system memory by input/output devices with buffer valid indication|
US6408399B1|1999-02-24|2002-06-18|Lucent Technologies Inc.|High reliability multiple processing and control system utilizing shared components|
JP3948692B2|1999-03-26|2007-07-25|シャープ株式会社|Semiconductor memory device|
US6370626B1|1999-04-30|2002-04-09|Emc Corporation|Method and apparatus for independent and simultaneous access to a common data set|
US6363385B1|1999-06-29|2002-03-26|Emc Corporation|Method and apparatus for making independent data copies in a data processing system|
US6430118B1|1999-08-18|2002-08-06|Intel Corporation|Data storage utilizing parity data to enhance performance|
US6560726B1|1999-08-19|2003-05-06|Dell Usa, L.P.|Method and system for automated technical support for computers|
US6760708B1|1999-08-19|2004-07-06|Dell Products L.P.|Method and system for migrating stored data to a build-to-order computing system|
US6381619B1|1999-09-13|2002-04-30|Hewlett-Packard Company|Computer data storage system with migration plan generator|
US6571258B1|1999-09-13|2003-05-27|Hewlett Packard Development Company L.P.|Computer data storage system with parallelization migration plan generator|
US6539499B1|1999-10-06|2003-03-25|Dell Usa, L.P.|Graphical interface, method, and system for the provision of diagnostic and support services in a computer system|
US6574615B1|1999-10-06|2003-06-03|Dell Usa, L.P.|System and method for monitoring support activity|
US6563698B1|1999-10-06|2003-05-13|Dell Usa, L.P.|System and method for providing a computer system with a detachable component|
US6556431B1|1999-10-06|2003-04-29|Dell Usa, L.P.|System and method for converting alternating current into direct current|
US6598223B1|1999-10-06|2003-07-22|Dell Usa, L.P.|Method and system for installing and testing build-to-order components in a defined configuration computer system|
US6564220B1|1999-10-06|2003-05-13|Dell Usa, L.P.|System and method for monitoring support activity|
US6606716B1|1999-10-06|2003-08-12|Dell Usa, L.P.|Method and system for automated technical support for computers|
US6317316B1|1999-10-06|2001-11-13|Dell Usa, L.P.|Method and system for integrated personal computer components|
TW454120B|1999-11-11|2001-09-11|Miralink Corp|Flexible remote data mirroring|
JP3922857B2|1999-12-13|2007-05-30|パイオニア株式会社|Navigation system|
US6571354B1|1999-12-15|2003-05-27|Dell Products, L.P.|Method and apparatus for storage unit replacement according to array priority|
US6601153B1|1999-12-31|2003-07-29|Unisys Corporation|Method and apparatus for increasing computer performance through asynchronous memory block initialization|
JP4434407B2|2000-01-28|2010-03-17|株式会社日立製作所|Subsystem and integrated system thereof|
JP3918394B2|2000-03-03|2007-05-23|株式会社日立製作所|Data migration method|
JP2001256003A|2000-03-10|2001-09-21|Hitachi Ltd|Disk array controller, its disk array control unit and its expanding method|
US6631452B1|2000-04-28|2003-10-07|Idea Corporation|Register stack engine having speculative load/store modes|
DE60112589T2|2001-11-13|2006-06-22|Hitachi, Ltd.|Computer data migration device and method therefor|
JP4175764B2|2000-05-18|2008-11-05|株式会社日立製作所|Computer system|
JP2002014777A|2000-06-29|2002-01-18|Hitachi Ltd|Data moving method and protocol converting device, and switching device using the same|
JP3992427B2|2000-08-01|2007-10-17|株式会社日立製作所|File system|
US6721868B1|2000-08-09|2004-04-13|Intel Corporation|Redirecting memory accesses for headless systems|
US6823336B1|2000-09-26|2004-11-23|Emc Corporation|Data storage system and method for uninterrupted read-only access to a consistent dataset by one host processor concurrent with read-write access by another host processor|
US6434682B1|2000-09-28|2002-08-13|International Business Machines Corporation|Data management system with shortcut migration via efficient automatic reconnection to previously migrated copy|
US6697895B1|2000-11-10|2004-02-24|Spectra Logic Corporation|Network attached tape storage system|
US6671774B1|2000-11-10|2003-12-30|Emc Corporation|Method and apparatus for performing swap analysis|
US7620665B1|2000-11-21|2009-11-17|International Business Machines Corporation|Method and system for a generic metadata-based mechanism to migrate relational data between databases|
US6557089B1|2000-11-28|2003-04-29|International Business Machines Corporation|Backup by ID-suppressed instant virtual copy then physical backup copy with ID reintroduced|
AT361500T|2000-12-15|2007-05-15|Ibm|METHOD AND SYSTEM FOR SCALABLE, HIGH-PERFORMANCE HIERARCHICAL MEMORY MANAGEMENT|
EP1215590B1|2000-12-15|2007-05-02|International Business Machines Corporation|Method and system for scalable, high performance hierarchical storage management|
JP2002189570A|2000-12-20|2002-07-05|Hitachi Ltd|Duplex method for storage system, and storage system|
US6853978B2|2001-02-23|2005-02-08|Power Measurement Ltd.|System and method for manufacturing and configuring intelligent electronic devices to order|
US7085824B2|2001-02-23|2006-08-01|Power Measurement Ltd.|Systems for in the field configuration of intelligent electronic devices|
US7143252B2|2001-05-10|2006-11-28|Hitachi, Ltd.|Storage apparatus system and method of data backup|
US7194590B2|2001-02-28|2007-03-20|Hitachi, Ltd.|Three data center adaptive remote copy|
US6785836B2|2001-04-11|2004-08-31|Broadcom Corporation|In-place data transformation for fault-tolerant disk storage systems|
US7167965B2|2001-04-30|2007-01-23|Hewlett-Packard Development Company, L.P.|Method and system for online data migration on storage systems with performance guarantees|
GB2375847B|2001-05-22|2005-03-16|Hewlett Packard Co|Protection and restoration of RAID configuration information in disaster recovery process|
US20020188774A1|2001-06-08|2002-12-12|Lessard Michael R.|Virtualizing external data as native data|
JP2003015826A|2001-07-04|2003-01-17|Hitachi Ltd|Shared memory copy function in disk array controller|
JP4689137B2|2001-08-08|2011-05-25|株式会社日立製作所|Remote copy control method and storage system|
US6640291B2|2001-08-10|2003-10-28|Hitachi, Ltd.|Apparatus and method for online data migration with remote copy|
US20050257216A1|2001-09-10|2005-11-17|David Cornell|Method and apparatus for facilitating deployment of software applications with minimum system downtime|
CN1307580C|2001-09-26|2007-03-28|Emc Corporation|Efficient management of large files|
US20030064811A1|2001-09-28|2003-04-03|Greg Schlottmann|Gaming device with write only mass storage|
US6832289B2|2001-10-11|2004-12-14|International Business Machines Corporation|System and method for migrating data|
US6751301B1|2001-10-19|2004-06-15|Unisys Corporation|Administration tool for supporting information technology system migrations|
US6976139B2|2001-11-14|2005-12-13|Emc Corporation|Reversing a communication path between storage devices|
US6701392B1|2001-11-14|2004-03-02|Emc Corporation|Hierarchical approach to identifying changing device characteristics|
US6862632B1|2001-11-14|2005-03-01|Emc Corporation|Dynamic RDF system for transferring initial data between source and destination volume wherein data maybe restored to either volume at same time other data is written|
JP2003162378A|2001-11-26|2003-06-06|Hitachi Ltd|Method for duplicating data|
JP4434543B2|2002-01-10|2010-03-17|株式会社日立製作所|Distributed storage system, storage device, and data copying method|
US6728791B1|2002-01-16|2004-04-27|Adaptec, Inc.|RAID 1 read mirroring method for host adapters|
US6701385B1|2002-01-16|2004-03-02|Adaptec, Inc.|Raid 1 write mirroring method for host adapters|
JP4039658B2|2002-02-08|2008-01-30|株式会社東芝|Software management method, communication system, terminal, access point, security countermeasure file download method used in communication system terminal|
JP2003296039A|2002-04-02|2003-10-17|Hitachi Ltd|Cluster configuration storage system and method for controlling the same|
US7076690B1|2002-04-15|2006-07-11|Emc Corporation|Method and apparatus for managing access to volumes of storage|
JP4704659B2|2002-04-26|2011-06-15|株式会社日立製作所|Storage system control method and storage control device|
JP2003316522A|2002-04-26|2003-11-07|Hitachi Ltd|Computer system and method for controlling the same system|
US7546364B2|2002-05-16|2009-06-09|Emc Corporation|Replication of remote copy data for internet protocol transmission|
JP2004013215A|2002-06-03|2004-01-15|Hitachi Ltd|Storage system, storage sub-system, and information processing system including them|
US7584131B1|2002-07-31|2009-09-01|Ameriprise Financial, Inc.|Method for migrating financial and indicative plan data between computerized record keeping systems without a blackout period|
US6952758B2|2002-07-31|2005-10-04|International Business Machines Corporation|Method and system for providing consistent data modification information to clients in a storage system|
US7707151B1|2002-08-02|2010-04-27|Emc Corporation|Method and apparatus for migrating data|
US7571206B2|2002-08-12|2009-08-04|Equallogic, Inc.|Transparent request routing for a partitioned application service|
US7047377B2|2002-08-20|2006-05-16|Gruintine Pueche, Inc.|System and method for conducting an auction-based ranking of search results on a computer network|
JP3781369B2|2002-09-02|2006-05-31|株式会社日立製作所|Storage subsystem|
JP2004102374A|2002-09-05|2004-04-02|Hitachi Ltd|Information processing system having data transition device|
JP2004110367A|2002-09-18|2004-04-08|Hitachi Ltd|Storage system control method, storage control device, and storage system|
JP2004110613A|2002-09-20|2004-04-08|Toshiba Corp|Controller, control program, objective device, and control system|
US20040078521A1|2002-10-17|2004-04-22|International Business Machines Corporation|Method, apparatus and computer program product for emulating an iSCSI device on a logical volume manager|
US7546482B2|2002-10-28|2009-06-09|Emc Corporation|Method and apparatus for monitoring the storage of data in a computer system|
US7263593B2|2002-11-25|2007-08-28|Hitachi, Ltd.|Virtualization controller and data transfer control method|
JP4352693B2|2002-12-10|2009-10-28|株式会社日立製作所|Disk array control device and control method thereof|
US7376764B1|2002-12-10|2008-05-20|Emc Corporation|Method and apparatus for migrating data in a computer system|
US7080225B1|2002-12-10|2006-07-18|Emc Corporation|Method and apparatus for managing migration of data in a computer system|
US6959370B2|2003-01-03|2005-10-25|Hewlett-Packard Development Company, L.P.|System and method for migrating data between memories|
JP2004220450A|2003-01-16|2004-08-05|Hitachi Ltd|Storage device, its introduction method and its introduction program|
US7627650B2|2003-01-20|2009-12-01|Equallogic, Inc.|Short-cut response for distributed services|
US7461146B2|2003-01-20|2008-12-02|Equallogic, Inc.|Adaptive storage block data distribution|
US8499086B2|2003-01-21|2013-07-30|Dell Products L.P.|Client load distribution|
US8037264B2|2003-01-21|2011-10-11|Dell Products, L.P.|Distributed snapshot process|
US7127577B2|2003-01-21|2006-10-24|Equallogic Inc.|Distributed snapshot process|
US20040210724A1|2003-01-21|2004-10-21|Equallogic Inc.|Block data migration|
US7937551B2|2003-01-21|2011-05-03|Dell Products L.P.|Storage systems having differentiated storage pools|
US6981117B2|2003-01-29|2005-12-27|International Business Machines Corporation|Method, system, and program for transferring data|
WO2004077216A2|2003-01-30|2004-09-10|Vaman TechnologiesLimited|System and method for heterogeneous data migration in real-time|
JP4651913B2|2003-02-17|2011-03-16|株式会社日立製作所|Storage system|
JP3974538B2|2003-02-20|2007-09-12|株式会社日立製作所|Information processing system|
JP2004258944A|2003-02-26|2004-09-16|Hitachi Ltd|Storage device and method for managing it|
JP4165747B2|2003-03-20|2008-10-15|株式会社日立製作所|Storage system, control device, and control device program|
JP4267353B2|2003-03-28|2009-05-27|株式会社日立製作所|Data migration support system and data migration support method|
US7870218B2|2003-04-09|2011-01-11|Nec Laboratories America, Inc.|Peer-to-peer system and method with improved utilization|
JP2004318743A|2003-04-21|2004-11-11|Hitachi Ltd|File transfer device|
US7093088B1|2003-04-23|2006-08-15|Emc Corporation|Method and apparatus for undoing a data migration in a computer system|
US7080221B1|2003-04-23|2006-07-18|Emc Corporation|Method and apparatus for managing migration of data in a clustered computer system environment|
US7263590B1|2003-04-23|2007-08-28|Emc Corporation|Method and apparatus for migrating data in a computer system|
US7805583B1|2003-04-23|2010-09-28|Emc Corporation|Method and apparatus for migrating data in a clustered computer system environment|
US7415591B1|2003-04-23|2008-08-19|Emc Corporation|Method and apparatus for migrating data and automatically provisioning a target for the migration|
US7260739B2|2003-05-09|2007-08-21|International Business Machines Corporation|Method, apparatus and program storage device for allowing continuous availability of data during volume set failures in a mirrored environment|
JP2004348464A|2003-05-22|2004-12-09|Hitachi Ltd|Storage device and communication signal shaping circuit|
JP4060235B2|2003-05-22|2008-03-12|株式会社日立製作所|Disk array device and disk array device control method|
US7165187B2|2003-06-06|2007-01-16|Hewlett-Packard Development Company, L.P.|Batch based distributed data redundancy|
US7287137B2|2003-06-06|2007-10-23|Hewlett-Packard Development Company, L.P.|Batched, asynchronous data redundancy technique|
US7380081B2|2003-06-06|2008-05-27|Hewlett-Packard Development Company, L.P.|Asynchronous data redundancy technique|
US7178055B2|2003-06-06|2007-02-13|Hewlett-Packard Development Company, L.P.|Method and system for ensuring data consistency after a failover event in a redundant data storage system|
US7089383B2|2003-06-06|2006-08-08|Hewlett-Packard Development Company, L.P.|State machine and system for data redundancy|
US7120825B2|2003-06-06|2006-10-10|Hewlett-Packard Development Company, L.P.|Adaptive batch sizing for asynchronous data redundancy|
US7152182B2|2003-06-06|2006-12-19|Hewlett-Packard Development Company, L.P.|Data redundancy system and method|
US20040250030A1|2003-06-06|2004-12-09|Minwen Ji|Data redundancy using portal and host computer|
JP4149315B2|2003-06-12|2008-09-10|インターナショナル・ビジネス・マシーンズ・コーポレーション|Backup system|
US7085892B2|2003-06-17|2006-08-01|International Business Machines Corporation|Method, system, and program for removing data in cache subject to a relationship|
US20040260735A1|2003-06-17|2004-12-23|Martinez Richard Kenneth|Method, system, and program for assigning a timestamp associated with data|
JP4462852B2|2003-06-23|2010-05-12|株式会社日立製作所|Storage system and storage system connection method|
JP2005018193A|2003-06-24|2005-01-20|Hitachi Ltd|Interface command control method for disk device, and computer system|
US7111136B2|2003-06-26|2006-09-19|Hitachi, Ltd.|Method and apparatus for backup and recovery system using storage based journaling|
US7398422B2|2003-06-26|2008-07-08|Hitachi, Ltd.|Method and apparatus for data recovery system using storage based journaling|
JP4124348B2|2003-06-27|2008-07-23|株式会社日立製作所|Storage system|
US7379974B2|2003-07-14|2008-05-27|International Business Machines Corporation|Multipath data retrieval from redundant array|
US20050015416A1|2003-07-16|2005-01-20|Hitachi, Ltd.|Method and apparatus for data recovery using storage based journaling|
US7047380B2|2003-07-22|2006-05-16|Acronis Inc.|System and method for using file system snapshots for online data backup|
US7246211B1|2003-07-22|2007-07-17|Swsoft Holdings, Ltd.|System and method for using file system snapshots for online data backup|
US20050022213A1|2003-07-25|2005-01-27|Hitachi, Ltd.|Method and apparatus for synchronizing applications for data recovery using storage based journaling|
JP2005056200A|2003-08-05|2005-03-03|Hitachi Ltd|Data management method, disk storage device and disk storage system|
US7873684B2|2003-08-14|2011-01-18|Oracle International Corporation|Automatic and dynamic provisioning of databases|
US6996635B2|2003-08-22|2006-02-07|International Business Machines Corporation|Apparatus and method to activate transparent data storage drive firmware updates|
US20060294039A1|2003-08-29|2006-12-28|Mekenkamp Gerhardus E|File migration history controls updating or pointers|
JP4349871B2|2003-09-09|2009-10-21|株式会社日立製作所|File sharing apparatus and data migration method between file sharing apparatuses|
US7219201B2|2003-09-17|2007-05-15|Hitachi, Ltd.|Remote storage disk control device and method for controlling the same|
JP4598387B2|2003-09-17|2010-12-15|株式会社日立製作所|Storage system|
US20050071546A1|2003-09-25|2005-03-31|Delaney William P.|Systems and methods for improving flexibility in scaling of a storage system|
JP4307202B2|2003-09-29|2009-08-05|株式会社日立製作所|Storage system and storage control device|
US7441052B2|2003-09-29|2008-10-21|Hitachi Data Systems Corporation|Methods and apparatuses for providing copies of stored data for disaster recovery and other uses|
US20050071560A1|2003-09-30|2005-03-31|International Business Machines Corp.|Autonomic block-level hierarchical storage management for storage networks|
US20050083862A1|2003-10-20|2005-04-21|Kongalath George P.|Data migration method, system and node|
JP4384470B2|2003-10-21|2009-12-16|株式会社日立製作所|Storage device management method|
US8655755B2|2003-10-22|2014-02-18|Scottrade, Inc.|System and method for the automated brokerage of financial instruments|
US20050091304A1|2003-10-27|2005-04-28|Advanced Premise Technologies, Llc|Telecommunications device and method|
US7146475B2|2003-11-18|2006-12-05|Mainstar Software Corporation|Data set level mirroring to accomplish a volume merge/migrate in a digital data storage system|
JP2005157521A|2003-11-21|2005-06-16|Hitachi Ltd|Method for monitoring state information of remote storage device and storage sub-system|
JP4307964B2|2003-11-26|2009-08-05|株式会社日立製作所|Access restriction information setting method and apparatus|
JP4156499B2|2003-11-28|2008-09-24|株式会社日立製作所|Disk array device|
US20050131965A1|2003-12-11|2005-06-16|Lam Wai T.|System and method for replicating data|
JP4412989B2|2003-12-15|2010-02-10|株式会社日立製作所|Data processing system having a plurality of storage systems|
US8244903B2|2003-12-22|2012-08-14|Emc Corporation|Data streaming and backup systems having multiple concurrent read threads for improved small file performance|
US7206795B2|2003-12-22|2007-04-17|Jean-Pierre Bono|Prefetching and multithreading for improved file read performance|
JP4320247B2|2003-12-24|2009-08-26|株式会社日立製作所|Configuration information setting method and apparatus|
JP4497918B2|2003-12-25|2010-07-07|株式会社日立製作所|Storage system|
US7296193B2|2004-01-07|2007-11-13|International Business Machines Corporation|Technique for processing an error using write-to-operator-with-reply in a ported application|
JP4500057B2|2004-01-13|2010-07-14|株式会社日立製作所|Data migration method|
JP3894196B2|2004-01-13|2007-03-14|株式会社日立製作所|Storage controller|
JP2005202893A|2004-01-19|2005-07-28|Hitachi Ltd|Storage device controller, storage system, recording medium recording program, information processor, and method for controlling storage system|
JP4554949B2|2004-01-23|2010-09-29|株式会社日立製作所|Management computer and storage device management method|
JP4477370B2|2004-01-30|2010-06-09|株式会社日立製作所|Data processing system|
JP4634049B2|2004-02-04|2011-02-23|株式会社日立製作所|Error notification control in disk array system|
US8311974B2|2004-02-20|2012-11-13|Oracle International Corporation|Modularized extraction, transformation, and loading for a database|
JP4391265B2|2004-02-26|2009-12-24|株式会社日立製作所|Storage subsystem and performance tuning method|
JP4520755B2|2004-02-26|2010-08-11|株式会社日立製作所|Data migration method and data migration apparatus|
US7533181B2|2004-02-26|2009-05-12|International Business Machines Corporation|Apparatus, system, and method for data access management|
JP4497957B2|2004-03-05|2010-07-07|株式会社日立製作所|Storage control system|
US7844586B2|2004-03-31|2010-11-30|Sap|Methods and systems in monitoring tools for effective data retrieval|
JP2005309550A|2004-04-19|2005-11-04|Hitachi Ltd|Remote copying method and system|
JP2005321913A|2004-05-07|2005-11-17|Hitachi Ltd|Computer system with file sharing device, and transfer method of file sharing device|
US7124143B2|2004-05-10|2006-10-17|Hitachi, Ltd.|Data migration in storage system|
JP2005326935A|2004-05-12|2005-11-24|Hitachi Ltd|Management server for computer system equipped with virtualization storage and failure preventing/restoring method|
US7571173B2|2004-05-14|2009-08-04|Oracle International Corporation|Cross-platform transportable database|
US8554806B2|2004-05-14|2013-10-08|Oracle International Corporation|Cross platform transportable tablespaces|
JP4452557B2|2004-05-27|2010-04-21|株式会社日立製作所|Remote copy with WORM guarantee|
JP4421385B2|2004-06-09|2010-02-24|株式会社日立製作所|Computer system|
US7613889B2|2004-06-10|2009-11-03|International Business Machines Corporation|System, method, and program for determining if write data overlaps source data within a data migration scheme|
US7685129B1|2004-06-18|2010-03-23|Emc Corporation|Dynamic data set migration|
US7707186B2|2004-06-18|2010-04-27|Emc Corporation|Method and apparatus for data set migration|
US7783798B1|2004-06-25|2010-08-24|Emc Corporation|System and method for managing use of available bandwidth for a link used for movement of data being copied in a data storage environment|
JP4387261B2|2004-07-15|2009-12-16|株式会社日立製作所|Computer system and storage system migration method|
JP2006039814A|2004-07-26|2006-02-09|Hitachi Ltd|Network storage system, and transfer method among multiple network storages|
US7058731B2|2004-08-03|2006-06-06|Hitachi, Ltd.|Failover and data migration using data replication|
JP4519563B2|2004-08-04|2010-08-04|株式会社日立製作所|Storage system and data processing system|
JP2006048313A|2004-08-04|2006-02-16|Hitachi Ltd|Method for managing storage system managed by a plurality of administrators|
JP4504762B2|2004-08-19|2010-07-14|株式会社日立製作所|Storage network migration method, management apparatus, management program, and storage network system|
US7296024B2|2004-08-19|2007-11-13|Storage Technology Corporation|Method, apparatus, and computer program product for automatically migrating and managing migrated data transparently to requesting applications|
JP4646574B2|2004-08-30|2011-03-09|株式会社日立製作所|Data processing system|
US7171532B2|2004-08-30|2007-01-30|Hitachi, Ltd.|Method and system for data lifecycle management in an external storage linkage environment|
JP4498867B2|2004-09-16|2010-07-07|株式会社日立製作所|Data storage management method and data life cycle management system|
JP4438582B2|2004-09-22|2010-03-24|株式会社日立製作所|Data migration method|
JP4568574B2|2004-10-15|2010-10-27|株式会社日立製作所|Storage device introduction method, program, and management computer|
JP4640770B2|2004-10-15|2011-03-02|株式会社日立製作所|Control device connected to external device|
JP2006127028A|2004-10-27|2006-05-18|Hitachi Ltd|Memory system and storage controller|
JP4585276B2|2004-11-01|2010-11-24|株式会社日立製作所|Storage system|
JP2006134049A|2004-11-05|2006-05-25|Hitachi Ltd|Device and method generating logic path between connection part of controller connected with host device and storage device equipped by the controller|
JP2006146476A|2004-11-18|2006-06-08|Hitachi Ltd|Storage system and data transfer method of storage system|
US7271996B2|2004-12-03|2007-09-18|Electro Industries/Gauge Tech|Current inputs interface for an electrical device|
US7743171B1|2004-12-16|2010-06-22|Emc Corporation|Formatting and initialization of device mirrors using initialization indicators|
US7343467B2|2004-12-20|2008-03-11|Emc Corporation|Method to perform parallel data migration in a clustered storage environment|
JP2006178811A|2004-12-24|2006-07-06|Hitachi Ltd|Storage system, and path control method for the system|
JP4634136B2|2004-12-24|2011-02-23|株式会社日立製作所|Storage control system|
US7702777B2|2004-12-28|2010-04-20|Lenovo Pte Ltd.|Centralized software maintenance of blade computer system|
US7490200B2|2005-02-10|2009-02-10|International Business Machines Corporation|L2 cache controller with slice directory and unified cache structure|
US7469318B2|2005-02-10|2008-12-23|International Business Machines Corporation|System bus structure for large L2 cache array topology with different latency domains|
US7366841B2|2005-02-10|2008-04-29|International Business Machines Corporation|L2 cache array topology for large cache with different latency domains|
US7308537B2|2005-02-10|2007-12-11|International Business Machines Corporation|Half-good mode for large L2 cache array topology with different latency domains|
US7363317B2|2005-02-15|2008-04-22|International Business Machines Corporation|Memory efficient XML shredding with partial commit|
JP4927339B2|2005-02-23|2012-05-09|株式会社日立製作所|Storage control device and control method thereof|
US8103640B2|2005-03-02|2012-01-24|International Business Machines Corporation|Method and apparatus for role mapping methodology for user registry migration|
KR100721571B1|2005-03-07|2007-05-23|삼성에스디아이 주식회사|Organic light emitting device and fabrication method of the same|
JP2006260240A|2005-03-17|2006-09-28|Hitachi Ltd|Computer system, storage device, computer software and data migration method|
US7281104B1|2005-03-21|2007-10-09|Acronis Inc.|System and method for online data migration|
JP4157536B2|2005-03-29|2008-10-01|富士通株式会社|Program execution device, program execution method, and service providing program|
US7868896B1|2005-04-12|2011-01-11|American Megatrends, Inc.|Method, apparatus, and computer-readable medium for utilizing an alternate video buffer for console redirection in a headless computer system|
JP2006293864A|2005-04-13|2006-10-26|Hitachi Ltd|Storage system, data movement management system, and data movement management method|
JP2006309483A|2005-04-28|2006-11-09|Hitachi Ltd|Storage device and storage system|
US7502872B2|2005-05-23|2009-03-10|International Business Machines Corporation|Method for out of user space block mode I/O directly between an application instance and an I/O adapter|
US20070005815A1|2005-05-23|2007-01-04|Boyd William T|System and method for processing block mode I/O operations using a linear block address translation protection table|
US7552240B2|2005-05-23|2009-06-23|International Business Machines Corporation|Method for user space operations for direct I/O between an application instance and an I/O adapter|
US7502871B2|2005-05-23|2009-03-10|International Business Machines Corporation|Method for query/modification of linear block address table entries for direct I/O|
US7464189B2|2005-05-23|2008-12-09|International Business Machines Corporation|System and method for creation/deletion of linear block address table entries for direct I/O|
US20060265525A1|2005-05-23|2006-11-23|Boyd William T|System and method for processor queue to linear block address translation using protection table control based on a protection domain|
JP2006331158A|2005-05-27|2006-12-07|Hitachi Ltd|Storage system|
JP4741304B2|2005-07-11|2011-08-03|株式会社日立製作所|Data migration method or data migration system|
KR100628102B1|2005-08-24|2006-09-26|엘지전자 주식회사|Mobile communication terminal with transferring message and activating received message and method using same|
JP2007058728A|2005-08-26|2007-03-08|Hitachi Ltd|Data transfer system|
US7577761B2|2005-08-31|2009-08-18|International Business Machines Corporation|Out of user space I/O directly between a host system and a physical adapter using file based linear block address translation|
US20070168567A1|2005-08-31|2007-07-19|Boyd William T|System and method for file based I/O directly between an application instance and an I/O adapter|
US7657662B2|2005-08-31|2010-02-02|International Business Machines Corporation|Processing user space operations directly between an application instance and an I/O adapter|
US7500071B2|2005-08-31|2009-03-03|International Business Machines Corporation|Method for out of user space I/O with server authentication|
US7702851B2|2005-09-20|2010-04-20|Hitachi, Ltd.|Logical volume transfer method and storage network system|
JP4700459B2|2005-09-27|2011-06-15|株式会社日立製作所|Data processing system, data management method, and storage system|
US7778960B1|2005-10-20|2010-08-17|American Megatrends, Inc.|Background movement of data between nodes in a storage cluster|
US8010829B1|2005-10-20|2011-08-30|American Megatrends, Inc.|Distributed hot-spare storage in a storage cluster|
US7996608B1|2005-10-20|2011-08-09|American Megatrends, Inc.|Providing redundancy in a storage system|
US8010485B1|2005-10-20|2011-08-30|American Megatrends, Inc.|Background movement of data between nodes in a storage cluster|
KR100763526B1|2005-12-12|2007-10-04|한국전자통신연구원|Device and method for management of application context|
US7634618B2|2006-01-03|2009-12-15|Emc Corporation|Methods, systems, and computer program products for optimized copying of logical units in a redundant array of inexpensive disks environment using buffers that are smaller than LUN delta map chunks|
US7634617B2|2006-01-03|2009-12-15|Emc Corporation|Methods, systems, and computer program products for optimized copying of logical units in a redundant array of inexpensive disks environment using buffers that are larger than LUN delta map chunks|
US20070162691A1|2006-01-06|2007-07-12|Bhakta Snehal S|Apparatus and method to store information|
US20070214313A1|2006-02-21|2007-09-13|Kalos Matthew J|Apparatus, system, and method for concurrent RAID array relocation|
GB0606639D0|2006-04-01|2006-05-10|Ibm|Non-disruptive file system element reconfiguration on disk expansion|
US7809892B1|2006-04-03|2010-10-05|American Megatrends Inc.|Asynchronous data replication|
JP4900784B2|2006-04-13|2012-03-21|株式会社日立製作所|Storage system and storage system data migration method|
US8131682B2|2006-05-11|2012-03-06|Hitachi, Ltd.|System and method for replacing contents addressable storage|
JP2007310618A|2006-05-18|2007-11-29|Fujitsu Ltd|Hierarchical storage device and its recording medium management method|
US20070297433A1|2006-06-26|2007-12-27|Mediatek Inc.|Method and apparatus for double buffering|
US7930496B2|2006-06-29|2011-04-19|International Business Machines Corporation|Processing a read request to a logical volume while relocating a logical volume from a first storage location to a second storage location using a copy relationship|
US8140785B2|2006-06-29|2012-03-20|International Business Machines Corporation|Updating metadata in a logical volume associated with a storage controller for data units indicated in a data structure|
US7555575B2|2006-07-27|2009-06-30|Hitachi, Ltd.|Method and apparatus for migrating data between storage volumes of different data pattern|
JP2008065486A|2006-09-05|2008-03-21|Hitachi Ltd|Storage system and its data migration method|
JP2008117253A|2006-11-07|2008-05-22|Hitachi Ltd|Storage device system, computer system and processing method therefor|
US8909599B2|2006-11-16|2014-12-09|Oracle International Corporation|Efficient migration of binary XML across databases|
JP2008146574A|2006-12-13|2008-06-26|Hitachi Ltd|Storage controller and storage control method|
JP2008165624A|2006-12-28|2008-07-17|Hitachi Ltd|Computer system and first storage device|
US7822933B1|2007-01-04|2010-10-26|Symantec Operating Corporation|Enabling off-host data migration using volume translation mappings, snappoint maps and linked volume technologies|
JP2007115287A|2007-01-24|2007-05-10|Hitachi Ltd|Storage controller|
US20080181107A1|2007-01-30|2008-07-31|Moorthi Jay R|Methods and Apparatus to Map and Transfer Data and Properties Between Content-Addressed Objects and Data Files|
US8046548B1|2007-01-30|2011-10-25|American Megatrends, Inc.|Maintaining data consistency in mirrored cluster storage systems using bitmap write-intent logging|
US7908448B1|2007-01-30|2011-03-15|American Megatrends, Inc.|Maintaining data consistency in mirrored cluster storage systems with write-back cache|
US8498967B1|2007-01-30|2013-07-30|American Megatrends, Inc.|Two-node high availability cluster storage solution using an intelligent initiator to avoid split brain syndrome|
US8108580B1|2007-04-17|2012-01-31|American Megatrends, Inc.|Low latency synchronous replication using an N-way router|
US7856022B1|2007-06-28|2010-12-21|Emc Corporation|Non-disruptive data migration with external virtualization engine|
US8990527B1|2007-06-29|2015-03-24|Emc Corporation|Data migration with source device reuse|
US8060710B1|2007-12-12|2011-11-15|Emc Corporation|Non-disruptive migration using device identity spoofing and passive/active ORS pull sessions|
US20090164528A1|2007-12-21|2009-06-25|Dell Products L.P.|Information Handling System Personalization|
US8341251B2|2008-01-03|2012-12-25|International Business Machines Corporation|Enabling storage area network component migration|
US20090193195A1|2008-01-25|2009-07-30|Cochran Robert A|Cache that stores data items associated with sticky indicators|
US9064132B1|2008-03-31|2015-06-23|Symantec Operating Corporation|Method for writing hardware encrypted backups on a per set basis|
US20090327837A1|2008-06-30|2009-12-31|Robert Royer|NAND error management|
JP5218284B2|2008-08-20|2013-06-26|富士通株式会社|Virtual disk management program, storage device management program, multi-node storage system, and virtual disk management method|
US20100070722A1|2008-09-16|2010-03-18|Toshio Otani|Method and apparatus for storage migration|
US8117413B2|2008-09-25|2012-02-14|International Business Machines Corporation|Logical data set migration|
JP2010079678A|2008-09-26|2010-04-08|Hitachi Ltd|Device for controlling storage switching|
US8677342B1|2008-10-17|2014-03-18|Honeywell International Inc.|System, method and apparatus for replacing wireless devices in a system|
CN101446926B|2008-11-10|2011-06-01|成都市华为赛门铁克科技有限公司|Method for storing power-fail data of cache memory, equipment and system thereof|
US20100138575A1|2008-12-01|2010-06-03|Micron Technology, Inc.|Devices, systems, and methods to synchronize simultaneous dma parallel processing of a single data stream by multiple devices|
US8140780B2|2008-12-31|2012-03-20|Micron Technology, Inc.|Systems, methods, and devices for configuring a device|
US20100174887A1|2009-01-07|2010-07-08|Micron Technology Inc.|Buses for Pattern-Recognition Processors|
JP5277991B2|2009-01-27|2013-08-28|富士通株式会社|Allocation control program, allocation control device, and allocation control method|
JP5229486B2|2009-02-16|2013-07-03|株式会社日立製作所|Management computer and processing management method|
US8307154B2|2009-03-03|2012-11-06|Kove Corporation|System and method for performing rapid data snapshots|
US8738872B2|2009-04-03|2014-05-27|Peter Chi-Hsiung Liu|Methods for migrating data in a server that remains substantially available for use during such migration|
JP5218252B2|2009-04-24|2013-06-26|富士通株式会社|Bus switch, computer system and computer system management method|
JP4990322B2|2009-05-13|2012-08-01|株式会社日立製作所|Data movement management device and information processing system|
JP4930553B2|2009-06-30|2012-05-16|富士通株式会社|Device having data migration function and data migration method|
US20120150527A1|2009-08-21|2012-06-14|Tadhg Creedon|Storage peripheral device emulation|
US8429360B1|2009-09-28|2013-04-23|Network Appliance, Inc.|Method and system for efficient migration of a storage object between storage servers based on an ancestry of the storage object in a network storage system|
JP5241671B2|2009-10-05|2013-07-17|株式会社日立製作所|Data migration control method for storage device|
US9323994B2|2009-12-15|2016-04-26|Micron Technology, Inc.|Multi-level hierarchical routing matrices for pattern-recognition processors|
EP2378435B1|2010-04-14|2019-08-28|Spotify AB|Method of setting up a redistribution scheme of a digital storage system|
EP2569710A4|2010-05-13|2014-01-22|Hewlett Packard Development Co|File system migration|
US20110289349A1|2010-05-24|2011-11-24|Cisco Technology, Inc.|System and Method for Monitoring and Repairing Memory|
JP5421201B2|2010-07-20|2014-02-19|株式会社日立製作所|Management system and management method for managing computer system|
US8793448B2|2010-07-29|2014-07-29|International Business Machines Corporation|Transparent data migration within a computing environment|
JP5595530B2|2010-10-14|2014-09-24|株式会社日立製作所|Data migration system and data migration method|
US8886900B2|2010-11-22|2014-11-11|International Business Machines Corporation|Legacy data management|
US9128942B1|2010-12-24|2015-09-08|Netapp, Inc.|On-demand operations|
US9069473B2|2011-01-27|2015-06-30|International Business Machines Corporation|Wait-free stream oriented migration based storage|
US8745034B1|2011-05-04|2014-06-03|Google Inc.|Selectively retrieving search results in accordance with predefined sort criteria|
US8819374B1|2011-06-15|2014-08-26|Emc Corporation|Techniques for performing data migration|
US9223502B2|2011-08-01|2015-12-29|Infinidat Ltd.|Method of migrating stored data and system thereof|
US8856191B2|2011-08-01|2014-10-07|Infinidat Ltd.|Method of migrating stored data and system thereof|
US9407433B1|2011-08-10|2016-08-02|Nutanix, Inc.|Mechanism for implementing key-based security for nodes within a networked virtualization environment for storage management|
US9043371B1|2011-11-04|2015-05-26|Google Inc.|Storing information in a trusted environment for use in processing data triggers in an untrusted environment|
US9058120B2|2011-11-09|2015-06-16|International Business Machines Corporation|Setting optimal space allocation policy for creating dependent snapshots to enhance application write performance and reduce resource usage|
US9148329B1|2011-11-30|2015-09-29|Google Inc.|Resource constraints for request processing|
US20130174176A1|2012-01-04|2013-07-04|Infinidat Ltd.|Workload management in a data storage system|
US9710397B2|2012-02-16|2017-07-18|Apple Inc.|Data migration for composite non-volatile storage device|
US9081503B2|2012-02-16|2015-07-14|Apple Inc.|Methods and systems for maintaining a storage volume with holes and filling holes|
US8914381B2|2012-02-16|2014-12-16|Apple Inc.|Correlation filter|
WO2013140492A1|2012-03-19|2013-09-26|富士通株式会社|Data access method and program|
US9235607B1|2012-03-29|2016-01-12|Google Inc.|Specifying a predetermined degree of inconsistency for test data|
US20130275546A1|2012-04-11|2013-10-17|AppSense, Inc.|Systems and methods for the automated migration from enterprise to cloud storage|
US9582524B1|2012-06-19|2017-02-28|Amazon Technologies, Inc.|Transformative migration of static data|
US8775861B1|2012-06-28|2014-07-08|Emc Corporation|Non-disruptive storage device migration in failover cluster environment|
US9524248B2|2012-07-18|2016-12-20|Micron Technology, Inc.|Memory management for a hierarchical memory system|
WO2014033945A1|2012-09-03|2014-03-06|株式会社日立製作所|Management system which manages computer system having plurality of devices to be monitored|
EP2898638B1|2012-09-21|2020-10-28|NYSE Group, Inc.|High performance data streaming|
US9460028B1|2012-12-27|2016-10-04|Emc Corporation|Non-disruptive and minimally disruptive data migration in active-active clusters|
WO2014106871A1|2013-01-07|2014-07-10|Hitachi, Ltd.|Storage system which realizes asynchronous remote copy using cache memory composed of flash memory, and control method thereof|
US10073851B2|2013-01-08|2018-09-11|Apple Inc.|Fast new file creation cache|
US9400611B1|2013-03-13|2016-07-26|Emc Corporation|Data migration in cluster environment using host copy and changed block tracking|
US9703574B2|2013-03-15|2017-07-11|Micron Technology, Inc.|Overflow detection and correction in state machine engines|
US9448965B2|2013-03-15|2016-09-20|Micron Technology, Inc.|Receiving data streams in parallel and providing a first portion of data to a first state machine engine and a second portion to a second state machine|
JP6142599B2|2013-03-18|2017-06-07|富士通株式会社|Storage system, storage device and control program|
IN2015DN01974A|2013-04-05|2015-08-14|Hitachi Ltd||
US9940019B2|2013-06-12|2018-04-10|International Business Machines Corporation|Online migration of a logical volume between storage systems|
US8819317B1|2013-06-12|2014-08-26|International Business Machines Corporation|Processing input/output requests using proxy and owner storage systems|
US9274989B2|2013-06-12|2016-03-01|International Business Machines Corporation|Impersonating SCSI ports through an intermediate proxy|
US9274916B2|2013-06-12|2016-03-01|International Business Machines Corporation|Unit attention processing in proxy and owner storage systems|
US9769062B2|2013-06-12|2017-09-19|International Business Machines Corporation|Load balancing input/output operations between two computers|
US9779003B2|2013-06-12|2017-10-03|International Business Machines Corporation|Safely mapping and unmapping host SCSI volumes|
EP2827286A3|2013-07-19|2015-03-25|Sears Brands, LLC|Method and system for migrating data between systems without downtime|
US9923762B1|2013-08-13|2018-03-20|Ca, Inc.|Upgrading an engine when a scenario is running|
US9298752B2|2013-08-26|2016-03-29|Dropbox, Inc.|Facilitating data migration between database clusters while the database continues operating|
US9317538B1|2013-09-10|2016-04-19|Ca, Inc.|Methods for generating data sets using catalog entries|
ZA201404975B|2014-01-30|2014-10-29|Attix5 Uk Ltd |Data migration method and systems|
US9087012B1|2014-06-04|2015-07-21|Pure Storage, Inc.|Disaster recovery at high reliability in a storage cluster|
US20150355862A1|2014-06-04|2015-12-10|Pure Storage, Inc.|Transparent array migration|
US9710186B2|2014-06-20|2017-07-18|Ca, Inc.|Performing online data migration with concurrent active user access to the data|
US9811677B2|2014-07-03|2017-11-07|Pure Storage, Inc.|Secure data replication in a storage grid|
US20160080490A1|2014-09-15|2016-03-17|Microsoft Corporation|Online data movement without compromising data integrity|
US10430210B2|2014-12-30|2019-10-01|Micron Technology, Inc.|Systems and devices for accessing a state machine|
WO2016109571A1|2014-12-30|2016-07-07|Micron Technology, Inc|Devices for time division multiplexing of state machine engine signals|
WO2016202364A1|2015-06-16|2016-12-22|Telefonaktiebolaget Lm Ericsson |A method of live migration|
US10698829B2|2015-07-27|2020-06-30|Datrium, Inc.|Direct host-to-host transfer for local cache in virtualized systems wherein hosting history stores previous hosts that serve as currently-designated host for said data object prior to migration of said data object, and said hosting history is checked during said migration|
US10691964B2|2015-10-06|2020-06-23|Micron Technology, Inc.|Methods and systems for event reporting|
US10846103B2|2015-10-06|2020-11-24|Micron Technology, Inc.|Methods and systems for representing processing resources|
US10977309B2|2015-10-06|2021-04-13|Micron Technology, Inc.|Methods and systems for creating networks|
US10061702B2|2015-11-13|2018-08-28|International Business Machines Corporation|Predictive analytics for storage tiering and caching|
JP6315000B2|2016-02-01|2018-04-25|日本電気株式会社|Storage management system and storage management method|
US10749986B2|2016-04-11|2020-08-18|Samsung Electronics Co., Ltd.|Platform for interaction via commands and entities|
US10942844B2|2016-06-10|2021-03-09|Apple Inc.|Reserved memory in memory management system|
US10146555B2|2016-07-21|2018-12-04|Micron Technology, Inc.|Adaptive routing to avoid non-repairable memory and logic defects on automata processor|
US10733159B2|2016-09-14|2020-08-04|Oracle International Corporation|Maintaining immutable data and mutable metadata in a storage system|
US10019311B2|2016-09-29|2018-07-10|Micron Technology, Inc.|Validation of a symbol response memory|
US10268602B2|2016-09-29|2019-04-23|Micron Technology, Inc.|System and method for individual addressing|
US10929764B2|2016-10-20|2021-02-23|Micron Technology, Inc.|Boolean satisfiability|
US10592450B2|2016-10-20|2020-03-17|Micron Technology, Inc.|Custom compute cores in integrated circuit devices|
US10860534B2|2016-10-27|2020-12-08|Oracle International Corporation|Executing a conditional command on an object stored in a storage system|
US10191936B2|2016-10-31|2019-01-29|Oracle International Corporation|Two-tier storage protocol for committing changes in a storage system|
US10169081B2|2016-10-31|2019-01-01|Oracle International Corporation|Use of concurrent time bucket generations for scalable scheduling of operations in a computer system|
US10180863B2|2016-10-31|2019-01-15|Oracle International Corporation|Determining system information based on object mutation events|
US10956051B2|2016-10-31|2021-03-23|Oracle International Corporation|Data-packed storage containers for streamlined access and migration|
US10275177B2|2016-10-31|2019-04-30|Oracle International Corporation|Data layout schemas for seamless data migration|
US10445061B1|2016-11-07|2019-10-15|Microsoft Technology Licensing, Llc|Matching entities during data migration|
JP6848060B2|2016-11-26|2021-03-24|Huawei Technologies Co., Ltd.|Data migration method, host, and solid-state disk|
CN108268501B|2016-12-30|2020-09-18|中国移动通信集团北京有限公司|Service processing method and device in online data migration process|
US10318191B1|2017-07-18|2019-06-11|EMC IP Holding Company LLC|Migration and transformation of data storage in a replicated environment|
US10949354B2|2017-09-05|2021-03-16|International Business Machines Corporation|Distributed safe data commit in a data storage system|
US10769074B2|2017-11-09|2020-09-08|Microsoft Technology Licensing, Llc|Computer memory content movement|
US10430270B2|2017-12-04|2019-10-01|Bank Of America Corporation|System for migrating data using dynamic feedback|
US10592154B1|2018-01-31|2020-03-17|EMC IP Holding Company LLC|Accessing data previously migrated to a cloud|
US10942898B2|2018-04-30|2021-03-09|Microsoft Technology Licensing, Llc|System and method for a persistent hierarchical work manager|
US10972450B1|2019-04-15|2021-04-06|Wells Fargo Bank, N.A.|Systems and methods for securely migrating data between devices|
RU199929U1|2019-12-31|2020-09-29|Федеральное государственное бюджетное образовательное учреждение высшего образования «Московский государственный университет геодезии и картографии»|DEVICE FOR PROCESSING STREAMS OF SPACE-TIME DATA IN REAL TIME MODE|
Legal status:
2002-02-21|STCF|Information on status: patent grant|Free format text: PATENTED CASE|
2005-09-12|FPAY|Fee payment|Year of fee payment: 4|
2009-09-14|FPAY|Fee payment|Year of fee payment: 8|
2013-03-14|FPAY|Fee payment|Year of fee payment: 12|
Priority:
Application number | Publication number | Priority date | Filing date | Patent title
US08/522,903|US5680640A|1995-09-01|1995-09-01|System for migrating data by selecting a first or second transfer means based on the status of a data element map initialized to a predetermined state|
US08/807,331|US6108748A|1995-09-01|1997-02-28|System and method for on-line, real time, data migration|
US09/363,482|US6240486B1|1995-09-01|1999-07-29|System and method for on-line, real time, data migration|
US09/735,023|US6356977B2|1995-09-01|2000-12-12|System and method for on-line, real time, data migration|
US09/943,052| US6598134B2|1995-09-01|2001-08-30|System and method for on-line, real time, data migration|